At Jim's request, I added a utility module to the standard library that implements state-of-the-art timing of code snippets.
Using a slightly modified version of this code, here's the cost in microseconds of one for loop iteration (with 'pass' as the loop body) in various Python versions. All tests were run on my home machine: a 664 MHz Pentium III with 256 KB cache, running Red Hat Linux 7.3, compiled with gcc 2.96. Note the steady improvement over the years. :-)
version  plain  -O
-------  -----  -----
1.3      0.625  n/a
1.4      0.602  n/a
1.5.2    0.606  0.466
2.0      0.561  0.445
2.1      0.591  0.436
2.2      0.416  0.277
2.3a2+   0.246  0.248 (1)
The invocation was "python timeit.py -r5" (with -O added for the last column). This times 5 runs of a million iterations each and prints the time (normalized to usec per iteration) for the fastest run. I ran this twice for each combination and picked the lower of the two; the two results never differed by more than 0.002 usec.
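For those who prefer the module's Python API over the command line, here's a minimal sketch of the equivalent measurement (using modern syntax; the original runs were of course done under the 2.x interpreters listed in the table): repeat a million-iteration run of a 'pass' statement five times and report the best run, normalized to microseconds per iteration.

```python
import timeit

# Equivalent of "python timeit.py -r5": 5 runs of 1,000,000
# executions of 'pass', keeping the fastest run.
number = 1000000
times = timeit.repeat(stmt="pass", repeat=5, number=number)

# Normalize the best run to microseconds per iteration.
best_usec = min(times) / number * 1e6
print("%.3f usec per loop" % best_usec)
```

Taking the minimum rather than the mean is deliberate: slower runs are typically contaminated by other processes, so the fastest run is the best estimate of the true cost.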
(1) A mystery: the Python 2.3 binary installed in /usr/local/bin measured 0.266 for the -O case, but 0.248 without -O; i.e. -O made it slower! The byte-for-byte identical binary in my build tree produced the more reasonable measurements given in the table.
--Guido van Rossum (home page: http://www.python.org/~guido/)