When I first began learning computer algorithms, such as Bubble sort, big O notation was one of the hardest things to get my mind around.
To use Bubble sort as an example, the worst possible case is defined as
O(n²), the worst case being when the values you are trying to sort are in complete reverse order. What I originally believed
O(n²) to mean is that for
n number of values, it would take
n² iterations in order to sort the values correctly. This is most definitely not the case.
Yesterday I noticed something odd. I wrote a quick implementation of Bubble sort and found that for n number of values, it took
n² - n iterations in order to sort the values. Therefore, I believed the worst case to be
O(n² - n), not
O(n²). I was confused by this because just about every place I looked, Bubble sort’s worst case was defined as
O(n²). I decided to visit Wikipedia and read up on big O notation.
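The post doesn't include the implementation, but a naive version like the sketch below (the function name and the iteration counter are my own illustration) does exactly n × (n - 1) = n² - n inner-loop iterations, which matches the count described above:

```python
def bubble_sort(values):
    """Naive Bubble sort that sorts `values` in place and
    returns the number of inner-loop iterations performed."""
    n = len(values)
    count = 0
    for _ in range(n):          # n passes over the list
        for j in range(n - 1):  # n - 1 comparisons per pass
            count += 1
            if values[j] > values[j + 1]:
                values[j], values[j + 1] = values[j + 1], values[j]
    return count

# Worst case: values in complete reverse order.
data = list(range(10, 0, -1))
print(bubble_sort(data))  # prints 90, i.e. 10 * 9 = n² - n
```

A smarter implementation would shrink the inner loop each pass (giving roughly n²/2 comparisons) or stop early once no swaps occur, but either way the count is still on the order of n².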
I would like to point out, at this moment, that this year is my senior year of high school, and until now I had no real clue what a limit was. When I checked Wikipedia, surprise, surprise, big O was not the worst possible number of iterations, but rather the limiting behavior of a function. Needless to say, I felt pretty silly.
If it’s still not clear, O(n²) means that as n increases towards infinity, the number of iterations grows on the order of n². Constant factors and lower-order terms are ignored, because for large n they barely matter: n² - n gets proportionally closer and closer to n² as n grows. This explains why my Bubble sort function appeared to get close to n², but never actually hit it.
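To make that concrete, here is a quick check showing that the ratio (n² - n) / n² approaches 1 as n grows, which is why the -n term gets dropped:

```python
# The ratio of n² - n to n² tends to 1 as n increases,
# so for large inputs the two functions are effectively the same.
for n in [10, 100, 1000, 10000]:
    iterations = n * n - n
    print(n, iterations, iterations / (n * n))
# 10     90        0.9
# 100    9900      0.99
# 1000   999000    0.999
# 10000  99990000  0.9999
```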
Moral of the story? Learn some basic Calculus concepts prior to venturing very far into the realm of Computer Science.