# Big-O notation

A.M. Kuchling amk at amk.ca
Wed Apr 16 15:25:37 CEST 2003

On Wed, 16 Apr 2003 08:50:42 -0400,
Roy Smith <roy at panix.com> wrote:
<excellent discussion deleted...>
> algorithm tuning.  Imagine you've got a program which runs in O(N^2)
> time.  If you could find a way to reduce it to O(N), for an input set of
> just 10 items, you would have speeded it up by a factor of 10!  Compared

... except for one nit; the above inference isn't valid.  O() notation
ignores constant multipliers because they matter less and less as N grows,
but they still affect the actual running time.

Consider two algorithms: A is O(N) where each loop takes one second, and B
is O(N**2) where each loop takes .001 second.  For 10 items, A takes 10
seconds and B takes .1 second to run, so switching from B to A would
actually run slower.  The situation changes at N=1000, where A and B both
take 1000 seconds.  For N=2000, A takes 2000 seconds and B takes 4000
seconds, twice as long.
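The comparison above can be sketched in a few lines of Python (the per-loop
timings are the illustrative figures from the example, not measurements):

```python
# Hypothetical cost models for the two algorithms described above:
# A is O(N) at 1 second per iteration, B is O(N**2) at 0.001 second
# per iteration.

def time_a(n):
    return n * 1.0        # O(N): n iterations, 1 second each

def time_b(n):
    return n * n * 0.001  # O(N**2): n*n iterations, 0.001 second each

for n in (10, 1000, 2000):
    print(n, time_a(n), time_b(n))
```

Printing the two totals for N=10, 1000, and 2000 shows B winning at small N,
the break-even point at N=1000, and A pulling ahead after that.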

Algorithms often achieve better O() performance at the cost of making the
loop more complicated or requiring some precomputing step, so for small
problem sizes the straightforward algorithm might be faster.  Better O()
performance buys you something at larger problem sizes, of course.
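A concrete instance of that tradeoff (my example, not from the original
post) is repeated membership testing: a straightforward linear scan needs
no setup but is O(N) per query, while building a set is an O(N)
precomputing step that makes each later query O(1) on average:

```python
# Straightforward approach: no precomputation, O(N) per query.
def member_scan(items, x):
    for item in items:
        if item == x:
            return True
    return False

# Precomputed approach: pay O(N) once to build a set, then each
# membership query is O(1) on average.
def make_member_fast(items):
    lookup = set(items)
    return lambda x: x in lookup

data = list(range(1000))
fast = make_member_fast(data)
print(member_scan(data, 999), fast(999))   # both find 999
print(member_scan(data, -1), fast(-1))     # neither finds -1
```

For one query on a short list, the plain scan is perfectly fine; the set
only pays off once the data or the number of queries grows.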

(BTW, you should turn your post into a web page and put it somewhere; it
might be useful for educators.)

> complexity and algorithmic complexity.  This, of course, is what makes
> high level languages so useful in the first place, but I fear we may be
> training a new generation of programmers for whom algorithmic complexity
> is not as familiar a concept as it used to be.

I don't see that as being a problem; students can first learn the ideas of
big-O notation by counting the number of Python operations, ignoring the
wall-time performance of those operations.
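As a sketch of that teaching approach (my illustration, not from the post),
a student can instrument a function to count abstract operations rather
than timing it; doubling N roughly doubles the count, which is the O(N)
growth they should learn to recognize:

```python
# Count comparisons made by linear search, ignoring wall time.
def linear_search(items, target):
    comparisons = 0
    for item in items:
        comparisons += 1      # each comparison counts as one operation
        if item == target:
            break
    return comparisons

for n in (10, 20, 40):
    # worst case: the target is absent, so every element is compared
    print(n, linear_search(list(range(n)), -1))
```

The printed counts (10, 20, 40) track N exactly, with no stopwatch needed.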

--amk                                                             (www.amk.ca)
Now I'll never know if I was right.
-- Adric's last words, in "Earthshock"
