Big-O notation

Ken R. kriley1 at
Wed Apr 16 15:58:34 CEST 2003

Peter van Kampen <news at> wrote in message news:< at>...
> In article <mailman.1050457621.2105.python-list at>, Mike C.
> Fletcher wrote:
> > Andrew Koenig wrote:
> > 
> > Okay, so now we have "The Practical Coder's Guide to big-O Notation" :) 
> > .  We rock ;) ,
> > Mike
> You do, you know...c.l.p really does.
> I think I almost 'get it'. Except who or what decides what Big-O a
> certain algorithm has. Is that by 'hunch' or by running benchmarks or
> are there still other ways to recognize 'Big-O's (except for the
> formal computations that I understand are not very common)? 
It's been nearly forever since my compsci algorithms class, so be
gentle if I'm glaringly wrong :). Big-O notation is a way to convey
the magnitude of work it takes for an algorithm to do its job
(ignoring constant factors). So, for a sufficiently large N, the work
required by an algorithm grows like N, or N log N, etc. No one
decides what order an algorithm is; it is an inherent property of the
algorithm itself.

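To make the "magnitude of work" idea concrete, here's a small sketch (not from the original post) that counts the steps two toy loops take. The function names are my own invention; the point is just that a single loop does N steps while a nested loop does N*N, which is the shape behind O(N) and O(N^2).

```python
def linear_steps(n):
    """One pass over n items: the step count grows like n, i.e. O(n)."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def quadratic_steps(n):
    """A loop nested inside a loop: the step count grows like n*n,
    i.e. O(n^2) -- the same shape as bubble sort's comparisons."""
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps

for n in (10, 100, 1000):
    print(n, linear_steps(n), quadratic_steps(n))
```

Doubling N doubles the first count but quadruples the second, and that difference in growth rate is all Big-O is trying to capture.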
If you execute quicksort on a wide variety of sample sets of different
sizes, then measure and plot the processor time versus sample size,
the plot will approximate the graph of N log N (quicksort's average
case; its worst case is actually N^2). The same applies to bubble
sort (order N^2), etc.

I haven't seen the referenced Big-O posting, but there is a decent
explanation of Big-O and sorting algorithms here

