A. Lloyd Flanagan
alloydflanagan at attbi.com
Wed Apr 16 21:58:30 CEST 2003
Peter van Kampen <news at datatailors.xs4all.nl> wrote in message news:<slrnb9q90a.mhg.news at datatailors.com>...
> In article <mailman.1050457621.2105.python-list at python.org>, Mike C.
> Fletcher wrote:
> I think I almost 'get it'. Except who or what decides what Big-O a
> certain algorithm has. Is that by 'hunch' or by running benchmarks or
> are there still other ways to recognize 'Big-O's (except for the
> formal computations that I understand are not very common)?
Technically, the only way to determine the O() of a particular
algorithm is a complete mathematical analysis. One of the other
posts has an excellent simple example. You can time the algorithm on
particular data sets and fit a curve to the times, but that
doesn't guarantee the algorithm will behave the same way on other
data sets.
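To make the empirical approach concrete, here's a sketch (my own
illustration, not from the original post): time an operation at several
input sizes and watch how the ratios grow. This suggests, but never
proves, a Big-O class.

```python
# Hedged sketch: timing a linear-looking function at growing sizes.
# The name sum_list is just an example, not anything from the thread.
import timeit

def sum_list(data):
    total = 0
    for x in data:        # one pass over the data, so we expect O(n)
        total += x
    return total

for n in (1000, 10000, 100000):
    data = list(range(n))
    t = timeit.timeit(lambda: sum_list(data), number=100)
    print(n, t)   # roughly tenfold time for tenfold n suggests O(n)
```

The catch the post describes: a curve fitted to these few points could
also be matched by, say, O(n log n) over this range, which is why
timings alone can't settle the question.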
That mathematical analysis of an algorithm's behavior (its best-case,
average-case, and worst-case performance) can be trivial. For example,
x + 1 is O(1). Or it can be so complex that proving the exact behavior
is a Ph.D. thesis, or even an unsolved problem.
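As a small illustration of what the analysis counts (again my own
example): the O() class comes from how the number of basic steps grows
with the input size n, not from wall-clock time.

```python
# Three toy functions whose step counts grow differently with n.
def constant(n):
    return n + 1                  # one operation regardless of n: O(1)

def linear(n):
    return sum(1 for _ in range(n))                    # n steps: O(n)

def quadratic(n):
    return sum(1 for _ in range(n) for _ in range(n))  # n*n steps: O(n^2)
```

Doubling n leaves `constant` unchanged, doubles the work in `linear`,
and quadruples it in `quadratic`.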
> But these are not really 'algorithms'. Most algorithms are a lot more
> complex even when reduced to their essentials. I understand why
> quicksort is better than bubblesort but I don't see an obvious way to
> label quicksort O(??).
Everything's an algorithm :). Quicksort analysis isn't terribly
complicated as these things go: it comes out O(n log n) on average and
O(n^2) in the worst case. There's got to be an analysis up on the web
somewhere.
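Here's a sketch (mine, not from the thread) of a naive quicksort with a
comparison counter, which makes the standard analysis concrete: each
partition costs about one comparison per element, and on random input
the recursion depth averages around log2(n), giving O(n log n) overall;
already-sorted input with this first-element pivot degrades to O(n^2).

```python
# Naive quicksort that counts partition comparisons in counter[0].
import random

def quicksort(seq, counter):
    if len(seq) <= 1:
        return list(seq)
    pivot, rest = seq[0], seq[1:]
    counter[0] += len(rest)      # one comparison per element vs. the pivot
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, counter) + [pivot] + quicksort(right, counter)

counter = [0]
data = random.sample(range(100000), 1000)
assert quicksort(data, counter) == sorted(data)
print(counter[0])   # typically near 1000 * log2(1000), i.e. around 10^4
```

Feeding it `list(range(1000))` instead makes the pivot always the
minimum, so the count balloons toward 1000*999/2, the quadratic worst
case.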