John R. answered 08/20/24
Computer Science was my major at UCI
Big-O notation is a generalization used to describe the performance of an algorithm. Plain and simple. O(n) means that in the worst case, each of the n values in a data structure will need to be evaluated. O(n^2) means that in the worst case, on the order of n^2 operations are needed, where 'n' is the number of values in the data structure (this is poor performance, by the way, and very costly computationally). And O(log(n)) means that for a data structure containing 'n' values, the algorithm will need at most about log(n) evaluations before completion. This is really good performance, characteristic of tree-type data structures, since traversing a tree generally means branches are skipped or "trimmed" from the remaining values to be evaluated.

If you are still confused, look at the graphs of each of these functions and imagine a very large database. Think of Amazon's product database, or a large company's transaction log database, the contents of which would be somewhere in the tens if not hundreds of millions. Graphed, you can imagine where n^2 would land: 100 million squared is 10^16, ten quadrillion operations. For O(n) the graph is linear, so not great performance, but still far better than quadratic growth like O(n^2). And for logarithmic growth, 100 million values would require at most about 27 comparisons for a binary tree (log2 of 100 million is roughly 26.6), and fewer, about 17, for a ternary tree, where each node has three children. (A "trie" is something different, by the way: a prefix tree used for strings.) So really good performance, but still not the fastest.

The best possible performance you can generalize with Big-O notation is O(1), which means that no matter how many values you have in a data structure, it is the exact same computational overhead to reach the desired node or index. This is characteristic of a hash table, which computes the index of a particular value by running its key through a hashing function. (In practice, two keys can hash to the same index, called a collision, so O(1) is the average case for a hash table.) The sketches below illustrate each of these growth rates.
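To make the first two concrete, here is a minimal sketch in Python (my own illustration, not part of the original answer, and the function names are made up): linear_search touches each value once in the worst case, which is O(n), while has_duplicate_pair compares every pair of values, which is O(n^2).

```python
def linear_search(values, target):
    """O(n): in the worst case, every value is examined once."""
    for i, v in enumerate(values):
        if v == target:
            return i
    return -1

def has_duplicate_pair(values):
    """O(n^2): in the worst case, every pair of values is compared."""
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] == values[j]:
                return True
    return False
```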
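For O(log(n)), binary search over a sorted list is the classic example: each comparison discards half of the remaining values, which is exactly the "trimming" described above. The two prints at the bottom are a quick sanity check of the 27- and 17-comparison figures (again, an illustrative sketch, not from the answer itself).

```python
import math

def binary_search(sorted_values, target):
    """O(log n): each comparison halves the remaining search space."""
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return mid
        if sorted_values[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# Sanity-checking the comparison counts for 100 million values:
print(math.ceil(math.log2(100_000_000)))    # 27 for a binary tree
print(math.ceil(math.log(100_000_000, 3)))  # 17 for a ternary tree
```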
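And for O(1), a Python dict is a hash table under the hood, so a lookup costs one hash computation no matter how many entries the table holds (a hypothetical example; the "sku-" key names are invented for illustration).

```python
# A dict lookup hashes the key directly to a slot: O(1) on average.
catalog = {f"sku-{i}": f"product {i}" for i in range(1_000_000)}

print(catalog["sku-123456"])  # same cost whether the table holds
                              # a thousand entries or a million
```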