Big-O notation
Roy Smith
roy at panix.com
Wed Apr 16 08:50:42 EDT 2003
Peter van Kampen <news at datatailors.xs4all.nl> wrote:
> I think I almost 'get it'. Except who or what decides what Big-O a
> certain algorithm has. Is that by 'hunch' or by running benchmarks or
> are there still other ways to recognize 'Big-O's
You do it by analysis. Basically, you find the "inner loop", i.e. the bit
of core code that gets run over and over while the algorithm does its
work. Then, you figure out (by looking at the code) how many times that
inner loop runs as a function of the size of the input.
Sometimes, in a high level language like Python, there are implied loops
in what look like atomic operations. This can obscure the real
algorithmic complexity of a program, unless you know enough to look
deeper. Take the classic Python example:
s1 = ""
for s2 in stringList:
s1 += s2
On the surface, it looks like this should be an O(N) program, where N is
the number of strings in stringList. Certainly, for N strings, the loop
gets executed N times. But, the string addition has a couple of
inherent loops of its own. The most obvious is that each character in
s2 has to be copied someplace, so it really looks like this:
s1 = ""
for s2 in stringList:
for c in s2:
s1 += c
Now it looks like it takes O(k*N) steps, where N is again the number of
strings in stringList, and k is the average number of characters in a
string.
Here's the first tricky part -- for the purpose of big-O notation, the k
is meaningless. It's essentially a constant factor. For a given
universe of strings (say, words in a dictionary), k is not going to
change as you add more strings. Sure, it's going to have minor
variations as the average bops around, but once you've got a statistically
valid sample of input strings, it'll pretty much stay put. k may be a
function of the kinds of strings you're giving it (k might be bigger for
German compared to English), but it's not a function of the number of
strings you're feeding the program. For big-O purposes, O(k*N) where k
is a constant is the same as O(N). What we're trying to describe is the
shape of the growth curve, not the scaling factors.
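To make that concrete, here's a toy sketch (my numbers, using a made-up
cost model rather than real timings) of why k drops out: doubling N
doubles a k*N step count no matter what value k has.

def steps(n, k):
    # hypothetical cost model: each of n strings costs k copy steps
    return k * n

# doubling n always doubles the total, whatever k is; k just moves
# the curve up and down without changing its shape
for k in (5, 500):
    print(steps(2000, k) / float(steps(1000, k)))   # prints 2.0 both times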
And, here's the second tricky part -- since strings in Python are
immutable, the way you do s1 += s2 is to figure out how long each of the
two component strings is, allocate a new string big enough to hold both,
and copy the two original strings into the new one. So, let's tear that
apart a bit and see what it looks like, complexity-wise.
We need to figure out how long each component string is. On the
surface, that's O(N), where N is the length of the string. In C, it
looks something like:
length = 0;
while (*string != '\0') {
    length++;
    string++;
}
This takes 2*N steps (it does two additions on each of N trips through
the loop), but big-O says that O(2*N) is the same as O(N).
Fortunately for us, Python doesn't need to do that. Python stores
strings in a way where the length is pre-computed and stored. All it
really needs to do to get the length is something like "return
string.length", which is a constant-time operation (i.e. it takes the
same amount of time regardless of the length of the string). That's
called O(1).
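You can check that informally with the timeit module; the numbers below
are machine-dependent, but the two calls should take about the same time
per iteration despite the million-to-one difference in string length:

import timeit

short = "x" * 10
big = "x" * 10000000

# len() just reads the stored length field, so string size shouldn't matter
print(timeit.timeit(lambda: len(short), number=1000000))
print(timeit.timeit(lambda: len(big), number=1000000))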
So, we could rewrite the whole thing as something like:

s1 = ""
for s2 in stringList:
    len1 = len(s1)
    len2 = len(s2)
    newLen = len1 + len2
    temp = allocate buffer for newLen characters
    for c in s1:
        temp += c
    for c in s2:
        temp += c
    s1 = temp
Now, we analyse the complexity of each bit of that. It does the outer
loop N times, but how much work is involved in each iteration of the
outer loop? We already figured out that each of the len() calls is
constant time, so they don't add anything complexity-wise. Adding the
two lengths is also constant time.
For the sake of argument, I'm going to assume that memory allocation is
constant time, although I don't really know that for sure. To really do
the analysis right, I'd have to know the details of how Python memory
management works, and I don't, but I'm reasonably confident that it's
not going to be a determining factor here.
Now we come to the first interesting part. The "for c in s1" loop takes
as many iterations as s1 is long. Well, how long is s1? It varies as
the program progresses, but after we've processed M strings, it's
approximately k*M characters long (where k is again the average string
length). We again toss out the constant k, and come up with the
complexity of this particular loop being O(M). But, what's M? It
varies from 1 to N, and on average, it's N/2. But, the constant factor
of 1/2 doesn't mean anything to big-O, so O(M) = O(N/2) = O(N).
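If you want to see the quadratic total fall out of that, here's a little
counting sketch (assuming, hypothetically, that every string is exactly k
characters long) that adds up each character copy the loop performs; the
total is k*(1 + 2 + ... + N) = k*N*(N+1)/2, so doubling N roughly
quadruples the work:

def copies(n, k):
    # total characters copied by the naive += loop, assuming all
    # n strings are exactly k characters long (a made-up input)
    total = 0
    built = 0               # current length of s1
    for _ in range(n):
        total += built + k  # copy the old s1, then copy s2
        built += k
    return total

print(copies(10, 5))   # 275
print(copies(20, 5))   # 1050 -- about 4x the work for 2x the strings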
The "for c in s2" loop we've already discussed and decided it was O(k),
which is the same as O(1), i.e. constant time.
And finally, the rebinding of s1 to the new temporary string is constant
time. One minor hitch is that this causes the old s1 to fall out of
scope and become a candidate for garbage collection. Again, without
knowing the details of Python's memory management, I can't say for sure
what the algorithmic complexity of freeing a string is, but for now I'm
going to assume it's O(1).
So, overall, we do an O(N) inner loop (plus a lot of constant-time stuff
that we're ignoring), O(N) times, giving us an overall algorithmic
complexity of O(N^2). This is often called quadratic behavior, and is
generally considered evil and something to be avoided if at all possible.
How evil? Well, I read recently that the new Python 2.3 release is
supposed to be 10% to 20% faster than 2.2. That sounds impressive (and,
don't get me wrong, it is impressive), but it's bupkis compared to
algorithm tuning. Imagine you've got a program which runs in O(N^2)
time. If you could find a way to reduce it to O(N), for an input set of
just 10 items, you would have sped it up by a factor of 10! Compared
to upgrading your Python interpreter, you did 50-100 times better tuning
the algorithm. Now try to consider the implications of going from
O(N^2) to O(N) with 25,000 input items!
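For the string-building example above, the standard way to actually get
O(N) (not spelled out in this post, but worth knowing) is to hand the
whole list to join(), which can add up the lengths once, allocate the
result once, and copy each character exactly one time:

stringList = ["one", "two", "three"]   # hypothetical input
s1 = "".join(stringList)               # one pass over all characters: O(N)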
BTW, this is perhaps one of the few arguments against teaching Python
(or any high-level language) as a first programming language. So much
algorithmically complex stuff gets pushed down into atomic language
constructs that there's no longer much of a correlation between code
complexity and algorithmic complexity. This, of course, is what makes
high level languages so useful in the first place, but I fear we may be
training a new generation of programmers for whom algorithmic complexity
is not as familiar a concept as it used to be.
I'm not saying we should still be teaching C (or assembler) as a first
language. On the other hand, I suspect analysis of algorithms probably
needs to be emphasized more (and earlier) in the curriculum. When I
first learned this stuff, it was mostly just a codification and rigorous
analysis of concepts I'd already explored experimentally in the normal
course of writing C programs. I suspect now it's rather completely new
ground, and may seem rather esoteric and theoretical and not
particularly relevant to real-world problems like designing Java
animations for web pages :-)