Big-O notation

sik0fewl xxdigitalhellxx at hotmail.com
Wed Apr 16 16:56:36 EDT 2003


Peter van Kampen <news at datatailors.xs4all.nl> wrote in message 
news:<slrnb9q90a.mhg.news at datatailors.com>...
>I think I almost 'get it'. Except who or what decides what Big-O a
>certain algorithm has. Is that by 'hunch' or by running benchmarks or
>are there still other ways to recognize 'Big-O's (except for the
>formal computations that I understand are not very common)? 

I'll try to give a simple explanation. Big-O is an upper bound on how the 
running time of an algorithm grows as its input grows.

The running time usually depends on some variable, typically n, the size of 
the input.

Inserting into a hash table:
O(1) - constant time. This means that the amount of time it takes to insert 
into a hash table does not depend on the size of the table. This is because 
the slot for the item is computed directly from the key (the hash), and the 
item is stored there. This, of course, changes once collisions start 
occurring.
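For instance, Python's built-in dict is a hash table, so inserting a key is 
O(1) on average no matter how big the dict already is (a minimal sketch; the 
variable names and keys are just made up for illustration):

```python
# Inserting into a dict (a hash table) is O(1) on average:
# the key's hash picks the slot directly, so the size of the
# table doesn't matter.
table = {}
table["spam"] = 1   # hash("spam") determines where this goes
table["eggs"] = 2   # same cost even if the table held a million keys
print(table["spam"])  # -> 1
```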

Inserting into the first free spot of an array:
O(n) - linear time, where n is the size of the array. To find a free spot 
you may have to cycle through the array, checking up to n items (at the 
most).
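A minimal sketch of that scan, using None to mark free slots (the function 
name and list contents are just made up for illustration):

```python
# Find the first free slot (None) in a fixed-size array and insert
# there. Worst case we check all n slots, hence O(n).
def insert_first_free(arr, item):
    for i in range(len(arr)):
        if arr[i] is None:
            arr[i] = item
            return i
    return -1  # array is full

arr = ["a", None, None, "b"]
pos = insert_first_free(arr, "c")
print(pos)  # -> 1
```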

And it carries on. I forget exactly how to calculate them, but you can 
understand the simple ones.

Here's some really simple code to show different Big-O's

n = 24

# O(1)
# constant time
print(n)

# O(n)
# linear time
for x in range(0, n):
    print(x)

# O(n^2)
# quadratic time
for x in range(0, n):
    for y in range(0, n):
        print(x, y)

# O(2^n)
# exponential time (careful: 2**24 is about 16.7 million iterations!)
for y in range(0, 2**n):
    print(y)

Keep in mind that Big-O is the *upper bound*. These all illustrate the 
maximum time. Any real code is most likely going to reach its "destination" 
before it hits the end (though not always).
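A classic illustration of that: linear search is O(n) in the worst case, but 
it returns as soon as it finds the target, often well before scanning all n 
items (a minimal sketch; the function name and data are just made up):

```python
# Linear search: O(n) worst case, but it exits early on a hit.
def linear_search(items, target):
    for i, item in enumerate(items):
        if item == target:
            return i   # early exit: best case is O(1)
    return -1          # worst case: all n items were checked

data = [5, 3, 8, 1, 9]
print(linear_search(data, 8))  # -> 2 (found after only 3 comparisons)
print(linear_search(data, 7))  # -> -1 (had to check all 5 items)
```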

-- 
Hope this didn't make you more confused than ever,
Ryan




