Comparing lists - somewhat OT, but still ...

Steven D'Aprano steve at REMOVETHIScyber.com.au
Sun Oct 16 18:01:43 CEST 2005


On Sun, 16 Oct 2005 15:16:39 +0200, Christian Stapfer wrote:

> Come to think of an experience that I shared
> with a student who was one of those highly
> creative experimentalists you seem to have
> in mind. He had just bought a new PC and
> wanted to check how fast its floating point
> unit was as compared to our VAX. After
> having done his wonderfully creative
> experimenting, he was utterly dejected: "Our (old)
> VAX is over 10'000 times faster than my new PC",
> he told me, almost in despair. 

Which it was: it finished executing his code in roughly 1/10,000th of the
time his PC took.

> Whereupon I,
> always the uncreative, dogmatic theoretician,
> who does not believe that much in the decisiveness
> of the outcome of mere experiments, told him
> that this was *impossible*, that he *must* have
> made a mistake...

It wasn't a mistake and it did happen. The VAX finished the calculation
10,000 times faster than his PC. You have a strange concept of "impossible".

 
>     It turned out that the VAX compiler had been
> clever enough to hoist his simple-minded test
> code out of the driving loop. 

Optimizations have a tendency to make a complete mess of Big O
calculations, usually for the better. How does this support your
theory that Big O is a reliable predictor of program speed?
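The failure mode is easy to reproduce even without a VAX. Here is a minimal sketch in Python (the function names are mine, purely illustrative): a "benchmark" loop whose body is loop-invariant, next to the version a hoisting compiler is entitled to turn it into. CPython itself does not perform this optimization, so both versions genuinely run here; the point is that once a compiler does perform it, the measured time says nothing about floating-point speed.

```python
import timeit

def naive_flops_test(n=100_000):
    """Simple-minded benchmark: the loop body is identical every
    iteration, so an optimizing compiler may hoist it out entirely."""
    for _ in range(n):
        x = 1.234 * 5.678  # loop-invariant computation
    return x

def hoisted(n=100_000):
    """What a clever compiler may legally turn the loop above into."""
    x = 1.234 * 5.678  # computed once; the loop is gone
    return x

t_naive = timeit.timeit(naive_flops_test, number=10)
t_hoisted = timeit.timeit(hoisted, number=10)
print(f"naive: {t_naive:.4f}s  hoisted: {t_hoisted:.4f}s")
```

Both functions return the same value, yet the second "runs the test" thousands of times faster; a timing that stumbles into this transformation measures the compiler, not the hardware.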

For the record, the VAX 9000 can have up to four vector processors, each
running at up to 125 MFLOPS, or 500 MFLOPS in total. A Pentium III manages
about 850 MFLOPS. Comparing MIPS or FLOPS from one system to another is
very risky, for many reasons, but as a very rough and ready measure
of comparison, a four-processor VAX 9000 is somewhere around the
performance of a P-II or P-III, give or take some fudge factor.

So, depending on when your student did this experiment, it is entirely
conceivable that the VAX might have been faster even without the
optimization you describe. Of course, you haven't told us what model VAX,
or how many processors, or what PC your student had, so this comparison
might not be relevant.



> In fact, our VAX
> calculated the body of the loop only *once*
> and thus *immediately* announced that it had finished
> the whole test - the compiler on this student's
> PC, on the other hand, had not been clever enough
> for this type of optimization: hence the difference...

Precisely. And all the Big O notation in the world will not tell you that.
Only an experiment will. Now, perhaps in the simple case of a bare loop
doing the same calculation over and over again, you might be able to
predict ahead of time what optimisations the compiler will do. But for
more complex algorithms, forget it.
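On the thread's original topic of comparing lists, here is a small sketch of the kind of thing only looking at the actual work reveals. The `Counted` wrapper is my own illustration, not anything from the thread: it counts equality comparisons directly, so the O(n) list scan and the O(1) set lookup can be observed without a wall clock.

```python
class Counted:
    """Wrapper that counts equality comparisons, so we can see how
    much work `in` really does for a list versus a set."""
    comparisons = 0

    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        Counted.comparisons += 1
        return self.value == other.value

    def __hash__(self):
        return hash(self.value)

items = [Counted(i) for i in range(1000)]
lst = list(items)
st = set(items)
target = Counted(999)  # worst case for the list: last element

Counted.comparisons = 0
assert target in lst
list_cmps = Counted.comparisons

Counted.comparisons = 0
assert target in st
set_cmps = Counted.comparisons

print(list_cmps, set_cmps)  # the list scans ~1000 elements, the set ~1
```

That much, of course, Big O already tells you. What it hides is the constant factors around those comparisons, hashing cost, cache behaviour, interpreter overhead, which is exactly why timing the real code still matters.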

This is a clear case of experimentation leading to the discovery
of practical results which could not be predicted from Big O calculations.
I find it quite mind-boggling that you would present this as if it were a
triumph of abstract theoretical calculation when it was nothing of the sort.


>   I think this is really a cautionary tale for
> experimentalists: don't *believe* in the decisiveness
> of the outcomes your experiments, but try to *understand*
> them instead (i.e. relate them to your theoretical grasp
> of the situation)...

Or, to put it another way: your student discovered something by running an
experimental test of his code that he would never have learnt in a million
years of analysing his algorithm: that the VAX compiler optimizes very
cleverly.

The fact that your student didn't understand the problem well enough to
craft a good test of it is neither here nor there.



-- 
Steven.



