Big-O notation is a generalized way of describing how the cost of an algorithm grows with the size of its input. Plain and simple. O(n) means that in the worst case, each of the n values in a data structure will need to be evaluated. O(n^2) means that in the worst case, roughly n^2 operations are needed, where n is the number of values in the data structure (this is poor performance, by the way; very costly computationally). And O(log(n)) means that for a data structure containing n values, the algorithm needs at most about log(n) evaluations before completion. This is really good performance, and it is characteristic of tree-type data structures, because traversing a tree generally means whole branches are skipped, or "pruned," from the remaining values to be evaluated.

If you are still confused, look at the graphs of each of these functions and imagine a very large database, think Amazon's product catalog or a large company's transaction log, with contents somewhere in the tens if not hundreds of millions of records. For 100 million values, O(n^2) means on the order of 10^16 operations. For O(n) the graph is linear, so the performance is not great, but it is still far better than quadratic growth like O(n^2). For logarithmic growth, 100 million values would require at most about 27 comparisons in a binary search tree, and fewer, about 17, in a ternary tree, so really good performance, but still not the fastest.

The best possible performance you can describe with Big-O notation is O(1), which means that no matter how many values the data structure holds, it takes the same amount of work to reach the desired entry. This is characteristic of a hash table, where a hashing function computes the index of a value directly from its key.
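To make these growth classes concrete, here is a minimal Python sketch (my own illustration, not part of the original answer; the function names are just examples): a linear scan is O(n), an all-pairs comparison is O(n^2), binary search over sorted data is O(log n), and a hash-table lookup is O(1) on average.

# Illustrative sketches of the growth classes described above.

def linear_search(values, target):
    # O(n): in the worst case every element is examined once.
    for i, v in enumerate(values):
        if v == target:
            return i
    return -1

def has_duplicate(values):
    # O(n^2): every element is compared against every other element.
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] == values[j]:
                return True
    return False

def binary_search(sorted_values, target):
    # O(log n): each comparison discards half of the remaining values,
    # so about 27 comparisons suffice for 100 million sorted values.
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return mid
        if sorted_values[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def hash_lookup(table, key):
    # O(1) on average: the hash of the key leads straight to the entry
    # (Python's dict is a hash table).
    return table.get(key)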
What's the difference between O(n) and O(n^2)?
The speed of algorithms is often described in "big-O" notation, but what exactly does it mean?
2 Answers By Expert Tutors

Dean R. answered 07/06/24
Computer Scientist with 6+ Years of Experience
In simple terms, "big-O" notation describes how much time an algorithm takes to run relative to the size of its input.
Let's define an algorithm as something that takes input and produces output. Addition, for example, is an algorithm that takes two inputs (the two addends, or the numbers being added) and produces one output (their sum).
Recall how you learned to add large numbers in school. You line up the digits and add them column by column. Sometimes you have to carry a 1, but generally speaking it takes as many steps to add two numbers by hand as there are digits in the larger number.
This is an O(n) algorithm. The time it takes to solve the problem grows at the same pace as the input. To add two 100-digit numbers, it takes (about) 100 steps. Adding one digit to one of the inputs (specifically the larger one) adds one extra step.
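For illustration only, here is one way the grade-school method might look in Python, assuming the two numbers are given as decimal strings (the function name and representation are my own choices). The single pass over the digit columns is what makes it O(n).

def add_by_hand(a: str, b: str) -> str:
    # Grade-school addition on decimal strings: one pass over the digit
    # columns, so the number of steps grows linearly with the longer input.
    a, b = a.zfill(len(b)), b.zfill(len(a))       # pad the shorter number with zeros
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):  # rightmost column first
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_by_hand("987", "2345"))  # 3332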
Now consider multiplication. To multiply two numbers by hand, you first multiply every digit of one number by the ones digit of the other. Then you do the same with the tens digit, shifting the partial product one place, and so on, and finally you add up the partial products.
Multiplication (by hand) is an O(n^2) algorithm. To multiply two 100-digit numbers, it takes about 10,000 steps. That's because each digit of one number has to be multiplied by all 100 digits of the other. If you add one digit to one of the inputs, instead of incurring just one extra step, you incur 100 extra steps. The time it takes to solve the problem grows much faster than the input.
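Again as a sketch under the same assumptions (decimal strings, a hypothetical function name), grade-school multiplication has one loop nested inside another, one over each number's digits, which is where the n^2 comes from.

def multiply_by_hand(a: str, b: str) -> str:
    # Grade-school multiplication: every digit of a is multiplied by every
    # digit of b, so the number of steps grows with len(a) * len(b).
    result = [0] * (len(a) + len(b))              # room for all product digits (little-endian)
    for i, da in enumerate(reversed(a)):          # each digit of a ...
        for j, db in enumerate(reversed(b)):      # ... meets each digit of b
            result[i + j] += int(da) * int(db)
            result[i + j + 1] += result[i + j] // 10   # propagate the carry
            result[i + j] %= 10
    while len(result) > 1 and result[-1] == 0:    # drop leading zeros
        result.pop()
    return "".join(str(d) for d in reversed(result))

print(multiply_by_hand("123", "45"))  # 5535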
Big-O notation is an extremely important concept in computer science. An algorithm can be "correct" insofar as it produces the correct answer for all inputs, yet still be useless if it takes too long to run.