Re: [Python-Dev] Pie-thon benchmark code ready

Dan reported a problem with the benchmark on Mac OS X:
After some off-line email exchanges I think I have a fix for this behavior, which must have to do with a different length of the addresses shown in the default repr(), e.g. "<Foo object at 0xffff>". Version 1.0.1 of the benchmark is at the same place as before:

ftp://python.org/pub/python/parrotbench/parrotbench.tgz

(You can tell whether you have the fixed version by looking at the first line of README.txt; if it says "Parrot benchmark 1.0.1" you do.) I haven't heard back from Dan, but I assume that the fix works. Happy New Year everyone! --Guido van Rossum (home page: http://www.python.org/~guido/)

At 3:34 PM -0800 12/31/03, Guido van Rossum wrote:
Yup, works just fine. (You caught me during the east coast commute home :)
Happy New Year everyone!
Ho, ho, ho! -- Dan --------------------------------------"it's like this"------------------- Dan Sugalski even samurai dan@sidhe.org have teddy bears and even teddy bears get drunk

I suspect there are other folks who have run the pie-thon benchmarks on their machines. Perhaps we should construct a chart. Times below are in seconds.

~60     Guido's ancient 650 MHz Pentium
~27     Guido's desktop at work
~15     IBM T40 laptop
88.550  dual 450 MHz Pentium 2
22.340  Athlon XP2800 (1243.363 MHz clock)

The latter two times are for Python 2.3.3 built out of the box using the Makefile (make time) shipped with the parrotbench ftp file on an unloaded machine. The times reported are the user time from the time triplet.

Perhaps we can turn this into a benchmark comparison chart. In particular, in my experience pystone is a pretty good indicator of system performance even if it's a lousy benchmark. I'll report the pystone numbers for those same three systems:

home desktop: 10438.4 pystones/second
work desktop: 17421.6 pystones/second
laptop:       30198.1 pystones/second

Multiplying the pystone numbers by the parrotbench times should give a constant if the benchmarks are equivalent. But this doesn't look good; I get:

home desktop: 626304
work desktop: 470383
laptop:       452972

The home desktop is a clear outlier; it's more than twice as slow on the parrot benchmark, but only 2/3rds slower on pystone... (This is begging for a spreadsheet. :-) --Guido van Rossum (home page: http://www.python.org/~guido/)
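Guido's sanity check is easy to reproduce. A minimal sketch, using the pystone rates above and the approximate parrotbench user times from Dennis's list (the exact times for the "~60/~27/~15" machines are assumptions):

```python
# Multiply pystones/sec by parrotbench user seconds for each machine.
# If the two benchmarks measured the same thing, the products would be
# roughly constant across machines; the home desktop clearly is not.
systems = {
    "home desktop": (10438.4, 60.0),  # pystones/sec, parrotbench seconds (approx.)
    "work desktop": (17421.6, 27.0),
    "laptop":       (30198.1, 15.0),
}

for name, (stones, secs) in systems.items():
    # Unit of the product: pystone loops per parrotbench run.
    print("%-12s %10.0f" % (name, stones * secs))
```

The products come out near the 626304 / 470383 / 452972 figures quoted above; the small differences are rounding in the "~" times.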

On this box (3.05G P4, 512M, Fedora Core 1):

Using Python 2.3.3:
real 0m16.027s
user 0m14.940s
sys  0m0.280s
Pystone(1.1) time for 50000 passes = 1.43
This machine benchmarks at 34965 pystones/second

Using current-CVS Python (and the bytecode compiled with 2.3.3):
real 0m14.556s
user 0m14.130s
sys  0m0.280s
Pystone(1.1) time for 50000 passes = 1.41
This machine benchmarks at 35461 pystones/second

(Hm: should the default number of passes for pystone be raised? One and a half seconds seems rather a short time for a benchmark...) -- Anthony Baxter <anthony@interlink.com.au> It's never too late to have a happy childhood.

Python 2.3.2 (#49) pystone test on my Win2k SP4 machine. Note this is a 1.6 GHz Pentium M (Centrino).

Pystone(1.1) time for 50000 passes = 1.3762
This machine benchmarks at 36331.9 pystones/second

Faster than Anthony's 3 GHz P4. I believe this processor has a very large on-board cache; I'm not sure how large. It's a Dell Latitude D800. My point being, I think pystone performance is greatly affected by CPU cache, more so than by CPU clock speed. -- Brad Clements, bkc@murkworks.com (315)268-1000 http://www.murkworks.com (315)268-9812 Fax http://www.wecanstopspam.org/ AOL-IM: BKClements

My goal with this benchmark was not to compare CPUs but to see whether the pystone count times the time for the parrotbench is ~constant, and if it's not, what influences there are on the product. Can you report the parrotbench times? ftp://ftp.python.org/pub/python/parrotbench/parrotbench.tgz I'm only interested in the user time. --Guido van Rossum (home page: http://www.python.org/~guido/)

On 2 Jan 2004 at 9:57, Guido van Rossum wrote:
Pystone(1.1) time for 50000 passes = 1.3762 This machine benchmarks at 36331.9 pystones/second
Can you report the parrotbench times?
I'm on Windows and don't have a 'time' executable, so I created a .bat file like this:

O:\Python\parrot>type timit.bat
echo "start"
time < nul
python -O b.py > xout
echo "end"
time < nul

It said:

start
14:23:54.85
end
14:24:10.09

that's about 16 seconds -- Brad Clements, bkc@murkworks.com (315)268-1000 http://www.murkworks.com (315)268-9812 Fax http://www.wecanstopspam.org/ AOL-IM: BKClements
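A portable way to get the same measurement without a `time` executable is to wrap the call in Python itself. This is a hypothetical helper in the spirit of the thread's t.py, not the actual benchmark script; `time_call` and the stand-in workload are illustrative names:

```python
import time

def time_call(func, *args):
    """Time a single call, returning (result, elapsed wall-clock seconds)."""
    start = time.time()
    result = func(*args)
    return result, time.time() - start

if __name__ == "__main__":
    # Stand-in workload; for parrotbench you would time the imports
    # plus b.main() here instead.
    _, elapsed = time_call(sum, range(1000000))
    print("%.3f seconds" % elapsed)
```

Note this measures wall-clock time, not the user time Guido asked for; on an otherwise idle machine the two are close.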

Thanks! For others wanting to do the same on Windows, the latest parrotbench distro (1.0.4) contains a program t.py that should make this a bit easier: you can simply invoke python -O t.py (The version of t.py included in 1.0.3 was faulty. :-) --Guido van Rossum (home page: http://www.python.org/~guido/)

I fixed up my t.py to time both the imports and b.main(). It reports 15.188 seconds on my P(M) 1.6 GHz machine -- Brad Clements, bkc@murkworks.com (315)268-1000 http://www.murkworks.com (315)268-9812 Fax http://www.wecanstopspam.org/ AOL-IM: BKClements

I fixed up my t.py to time both the imports and b.main()
Right, that's what the latest version (1.0.4) now does.
It reports 15.188 seconds on my P(M) 1.6 GHz machine
Hm... My IBM T40 with 1.4 GHz P(M) reports 15.608. I bet the caches are more similar, and affect performance more than CPU speed... --Guido van Rossum (home page: http://www.python.org/~guido/)

Guido van Rossum <guido@python.org> writes:
Hm... My IBM T40 with 1.4 GHz P(M) reports 15.608. I bet the caches are more similar, and affect performance more than CPU speed...
You have a 32K L1 and a 1024K (!) L2. What a great machine!

As an example of the other end of the spectrum, I'm running current, but low-end hardware: a 2.2 GHz Celeron, 400 MHz FSB, 256MB DDR SDRAM. Verified working as expected by Intel Processor Speed Test. My L1 is 12K trace, 8K data. My L2 is only 128K, and when there's a miss there's no L3 to fall back on. Cost for processor and mobo, new, $120. I find this setup pretty snappy for what I do on it: development and home server. It's definitely not my game machine :-)

Python 2.2.1 (#1, Oct 17 2003, 16:36:36) [GCC 2.95.3 20010125 (prerelease, propolice)] on openbsd3 [best of 3]:
Pystone(1.1) time for 10000 passes = 0.93
This machine benchmarks at 10752.7 pystones/second

Python 2.3.3 (#15, Jan 2 2004, 14:39:36) [best of 3]:
Pystone(1.1) time for 50000 passes = 3.46
This machine benchmarks at 14450.9 pystones/second

Python 2.4a0 (#40, Jan 1 2004, 22:22:45) [current cvs] [best of 3]:
Pystone(1.1) time for 50000 passes = 2.91
This machine benchmarks at 17182.1 pystones/second
(but see the p.s. below)

Now the parrotbench, version 1.0.4.
[make extra passes to get .pyo first]

First, Python 2.3.3: best 3: 31.1/31.8/32.3
Next, Python 2.4a0, current cvs: best 3: 31.8/31.9/32.1

Since I noticed quite different ratios between the individual tests compared to what was posted by Seo Sanghyeon on the pypy list, here are my numbers (2.4a0):

hydra /home/kbk/PYTHON/python/nondist/sandbox/parrotbench$ make times
for i in 0 1 2 3 4 5 6; do echo b$i.py; time /home/kbk/PYSRC/python b$i.py >@out$i; cmp @out$i out$i; done
b0.py  5.48s real  5.30s user  0.05s system
b1.py  1.36s real  1.22s user  0.10s system
b2.py  0.44s real  0.42s user  0.04s system
b3.py  2.01s real  1.94s user  0.04s system
b4.py  1.69s real  1.63s user  0.05s system
b5.py  4.80s real  4.73s user  0.02s system
b6.py  1.84s real  1.56s user  0.26s system

I notice that some of these tests are a little faster on 2.3.3 while others are faster on 2.4, resulting in the overall time being about the same on both releases. N.B. compiling Python w/o the stack protector doesn't make a noticeable difference ;-)

There may be some other problem with this box that I haven't yet discovered, but right now I'm blaming the tiny cache for performance being 2-3x lower than expected from the clock rate, compared to what others are getting. -- KBK

p.s. I saw quite a large outlier on 2.4 pystone when I first tried it. I didn't believe it, but was able to scroll back and clip it:

Python 2.4a0 (#40, Jan 1 2004, 22:22:45)
[GCC 2.95.3 20010125 (prerelease, propolice)] on openbsd3
Type "help", "copyright", "credits" or "license" for more information.
This is 30% lower than the rate quoted above. I haven't been able to duplicate it. Maybe the OS or X was doing something which tied up the cache. This is a fairly lightly loaded machine running X, Ion, and emacs. I've also seen 20% variations in the 2.2.1 pystone benchmark. It seems to me that this benchmark is pretty cache-sensitive and should be done on an unloaded system, preferably without X, and with the results averaged over many trials if comparisons are desired, especially if the cache is small. I don't see the same variation in the parrotbench. It's just consistently low for this box.

Another small set of pystone and parrotbench results... Best of 7 runs on a standard Windows build of Python on Windows 2000:

Processor       P4 2.26 GHz          P4 Cel 1.7 GHz   P2 Cel 400 MHz
bus             533 MHz              400 MHz          66 MHz
L2 cache        512K                 128K             128K
memory          1G DualChan DDR333   256M DDR200      384M SDR66
2.2.2 Pystone   ~25600               ~18500           ~6086
2.3.2 Pystone   ~31800                                ~6830
2.3.2 Parrot    16.06s                                83.4s
2.3.2 Pys*Par   510708                                569622

A few things to note: I don't have consistent access to the 1.7 GHz Celeron, which is why it didn't get any 2.3 tests. The P2 Celeron is a dual-processor system. There does not seem to be an easy way to give a process single-processor affinity on the command line. However, even inserting a pause into pystone in order to alter processor affinity does not seem to appreciably alter the pystone speed.

This may suggest that Python has a working set so large that it doesn't fit into the 128K of L2 cache on either of the Celerons. This is supported by the fact that, based on the Python 2.2.2 pystone results, the 2.26 GHz P4 performs like a 2.35 GHz P4 Celeron (25600/18500 * 1.7), which could be due to the larger cache, the higher-speed memory, or more likely both. It is also likely that the P4s aren't as fast per clock as the P2 due to the heavy branch-misprediction penalties that the architecture suffers from.

Strange thing: the 2.26 GHz P4 gains more in the Pystone benchmark from the 2.2.2 -> 2.3.2 transition than the Celeron 400: 24% and 12% increases in Pystones respectively. - Josiah

On Thu, Jan 01, 2004 at 12:59:08PM -0500, Brad Clements wrote:
Yes, the Pentium M is Intel's fastest CPU, though they don't like to admit it lest people stop buying P4s. Current models have a 1MB L2 cache and a well-optimized P3-derived core. CPU cache size will have an effect on any benchmark; if one Python implementation outperforms another because of a better cache footprint, that's a good thing. Cache architecture will only become more important as time goes on.

On Thursday 01 January 2004 07:22 am, Anthony Baxter wrote: ...
(Hm: should the default number of passes for pystone be raised? One and a half seconds seems rather a short time for a benchmark...)
An old rule of thumb, when benchmarking something on many machines that exhibit a wide range of performance, is to organize the benchmarks in terms, not of completing a set number of repetitions, but rather of running as many repetitions as feasible within a (more or less) set time. Since, at the end, you'll be reporting in terms of "<whatever>s per second", it should make no real difference BUT it makes it more practical to run the "same" benchmark on machines (or more generally implementations) spanning orders of magnitude in terms of the performance they exhibit. (If the check for "how long have I been running so far" is very costly, you'll of course do it only "once every N repetitions" rather than doing it at every single pass.)

Wanting to run pystone on pypy, but not wanting to alter it TOO much from the CPython original so as to keep the results easily comparable, I just made the number of loops an optional command line parameter of the pystone script (as checked in within the pypy project). Since the laptops we had around at the Amsterdam pypy sprint ran about 0.8 to 3 pystones/sec with pypy (a few thousand times slower than with CPython 2.3.*), running all of the 50,000 iterations was just not practical at all.

Anyway, the ability of optionally passing in the number of iterations on the command line would also help with your opposite problem of too-fast machines -- if 50k loops just aren't enough for a reasonably-long run, you could use more. Alex
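Alex's rule of thumb can be sketched in a few lines. This is an illustrative helper, not pypy's or pystone's actual code; the half-second budget and the 1000-pass check interval are arbitrary:

```python
import time

def bench_for(budget, func, check_every=1000):
    """Run func() repeatedly for roughly `budget` seconds; return passes/sec.

    The clock is consulted only once every `check_every` passes, since
    checking elapsed time on every single pass would itself cost something.
    """
    reps = 0
    start = time.time()
    while True:
        for _ in range(check_every):
            func()
        reps += check_every
        elapsed = time.time() - start
        if elapsed >= budget:
            return reps / elapsed

if __name__ == "__main__":
    print("%.1f passes/second" % bench_for(0.5, lambda: None))
```

A slow interpreter simply completes fewer passes in the same budget, so the same script stays practical across orders of magnitude of speed.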

Yup. As a matter of historical detail, pystone used to have LOOPS set to 1000; in 1997 I changed it to 10K, and in 2002 I bumped it again to 50K. BTW, I'd gladly receive your patch for parameterizing LOOPS for inclusion into the standard Python library. --Guido van Rossum (home page: http://www.python.org/~guido/)

On Friday 02 January 2004 04:18 pm, Guido van Rossum wrote:
OK, I committed the modified pystone.py directly (I hope the change is small and "marginal" enough not to need formal review -- easy enough to back off if I've made some 'oops', anyway...). Alex

Thanks! Looks benign enough, even though my ideas about main() have evolved some. (http://www.artima.com/weblogs/viewpost.jsp?thread=4829) But that would be harder to fold into the existing pystone framework, so I'd say let's leave this alone. --Guido van Rossum (home page: http://www.python.org/~guido/)

Probably. Maybe you can try the new pystone from CVS, which has a command line option. (If I were to write it over I'd use the strategy from timeit, which has a self-adjusting strategy to pick an appropriate number of loops.)

I multiplied the two numbers (pystones/sec and parrotbench seconds) for your two runs and found the product to be much higher for your first run than for the second. This is suspicious (perhaps it points at too much variation in the pystone timing); for contrast, Skip's two runs before and after rebooting give a product within 0.05 percent of each other. Hm, of course this could also have to do with Python versions. Let me try... Yes, it looks like for me, on one machine I have handy here, Python 2.3.3 gives a higher product than current CVS (though not by as much as you measured).

So what does this product mean? It's higher if pystone is faster, or parrotbench is slower. Any factor that makes both faster (or slower) by the same amount has no effect. Ah (thinking aloud here): the unit is "pystones". Not pystones per second, just pystones. And this leads to an interpretation: it is how many pystone loops execute in the time needed to complete the parrotbench benchmark (so the unit is really "pystones per parrotbench").

So what makes a machine run more pystones in a parrotbench? I don't know enough about CPUs and caches to draw a conclusion, and I've got to start doing real work today... --Guido van Rossum (home page: http://www.python.org/~guido/)
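The timeit-style self-calibration Guido mentions can be sketched roughly as follows. The 0.2-second threshold and the power-of-ten growth are illustrative assumptions, not necessarily timeit's exact constants:

```python
import time

def autorange(func, threshold=0.2):
    """Grow the loop count 1, 10, 100, ... until one timed run takes
    at least `threshold` seconds; return (loop count, elapsed time)."""
    number = 1
    while True:
        start = time.time()
        for _ in range(number):
            func()
        elapsed = time.time() - start
        if elapsed >= threshold:
            return number, elapsed
        number *= 10

if __name__ == "__main__":
    n, t = autorange(lambda: sum(range(100)))
    print("%d loops in %.3f seconds" % (n, t))
```

This sidesteps the "is 50000 passes enough?" question entirely: fast machines get more loops, slow ones fewer, and the reported rate stays comparable.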

At 3:56 PM -0800 12/31/03, Dennis Allison wrote:
Well, add in:

real 1m20.412s
user 1m1.580s
sys  0m2.100s

for this somewhat creaky 600MHz G3 iBook... -- Dan --------------------------------------"it's like this"------------------- Dan Sugalski even samurai dan@sidhe.org have teddy bears and even teddy bears get drunk

I've been looking at user times only, but on that box the discrepancy between user and real time is enormous! It also suggests that a 600 MHz G3 and a 650 P3 are pretty close, and contrary to what Apple seems to claim, the G3's MHz rating isn't worth much more than a P3's MHz rating. Could you run pystone too? python -c 'from test.pystone import main; main(); main(); main()' and then report the smallest pystones/second value. --Guido van Rossum (home page: http://www.python.org/~guido/)
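Running a timed workload several times and keeping only the best figure, as Guido asks for here, can be wrapped up generically. Since `test.pystone` is a Python 2 module, this sketch uses a hypothetical stand-in workload; `best_of` is an illustrative name:

```python
import time

def best_of(n, func):
    """Run a timed workload n times and return the fastest (smallest) time,
    on the theory that the fastest run had the least interference."""
    times = []
    for _ in range(n):
        start = time.time()
        func()
        times.append(time.time() - start)
    return min(times)

if __name__ == "__main__":
    fastest = best_of(3, lambda: sum(range(200000)))
    print("fastest of 3: %.4f seconds" % fastest)
```

For a rate-based benchmark like pystone the same logic applies with max() instead of min(), since higher pystones/second is better.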

At 4:26 PM -0800 12/31/03, Guido van Rossum wrote:
Yeah, that's not uncommon. I'm not sure if there's a problem with the time command, or if it's something else. This is a laptop (currently on battery power, though with reduced performance turned off) and it's got a 100MHz main memory bus, which is definitely a limiting factor once code slips out of L1 cache. It's faster than my Gameboy, but some days I wonder how much... :)
Possibly. I'm not sure it's necessarily the best machine for that comparison, given the power/performance tradeoffs with laptops. I may give it a whirl on one of the G3 desktop machines around here to see how much bus speed matters. (I won't be surprised, given the likely working set for an interpreter, to find that bus speed makes more of a difference than CPU speed)
Smallest is: Pystone(1.1) time for 50000 passes = 6.61 This machine benchmarks at 7564.3 pystones/second -- Dan --------------------------------------"it's like this"------------------- Dan Sugalski even samurai dan@sidhe.org have teddy bears and even teddy bears get drunk

Thanks -- though I made a mistake -- I should've asked for the largest pystone value -- larger is better for pystone, unlike for running times. Anyway, this value is reasonable given your parrotbench time. --Guido van Rossum (home page: http://www.python.org/~guido/)

[Dennis Allison]
[Guido van Rossum]
I've been looking at user times only, but on that box the discrepancy between user and real time is enormous!
Some data from an older machine, 199 MHz Pentium Pro, 128 MB, Fedora 1:

$ time python2.3 -O 'b.py' >@out
real 2m8.363s
user 2m53.420s
sys  0m1.980s

$ python2.3 -c 'from test.pystone import main; main(); main(); main()'
Pystone(1.1) time for 50000 passes = 17
This machine benchmarks at 2941.18 pystones/second
(showing only the fastest)

I know nothing about benchmarking, and I don't know whether anyone can use this data, but it sure won't hurt anybody either ;)

Happy 2004, Gerrit Holl. -- 274. If any one hire a skilled artizan, he shall pay as wages of the ... five gerahs, as wages of the potter five gerahs, of a tailor five gerahs, of ... gerahs, ... of a ropemaker four gerahs, of ... gerahs, of a mason ... gerahs per day. -- 1780 BC, Hammurabi, Code of Law -- Asperger's Syndrome - a personal approach: http://people.nl.linux.org/~gerrit/english/

Dennis> I suspect there are other folks who have run the pie-thon
Dennis> benchmarks on their machines. Perhaps we should construct a
Dennis> chart.

800 MHz Ti PowerBook (G4), using Python from CVS, output of make (second run of two, to make sure .pyo files were already generated):

time python -O b.py >@out
real 0m49.999s
user 0m46.610s
sys  0m1.030s
cmp @out out

The best-of-three pystone measurement is 10964.9 pystones/second.

The Makefile should probably parameterize "python" like so:

PYTHON = python

time:
	time $(PYTHON) -O b.py >@out
	cmp @out out

... so people can specify precisely which version of Python to use. Skip

Skip Montanaro <skip@pobox.com> writes:
Bloody hell, that's about what I get on my 600Mhz G3 iBook (same model as Dan's, sounds like). Does your TiBook have no cache or a *really* slow bus or something? Cheers, mwh -- I've reinvented the idea of variables and types as in a programming language, something I do on every project. -- Greg Ward, September 1998

>> 800 MHz Ti PowerBook (G4), using Python from CVS, output of make
>> (second run of two, to make sure pyo files were already generated):
>>
>> time python -O b.py >@out
>>
>> real 0m49.999s
>> user 0m46.610s
>> sys 0m1.030s
>> cmp @out out
>>
>> The best-of-three pystone measurement is 10964.9 pystones/second.

Michael> Bloody hell, that's about what I get on my 600Mhz G3 iBook (same
Michael> model as Dan's, sounds like). Does your TiBook have no cache
Michael> or a *really* slow bus or something?

The Apple System Profiler says my bus speed is 133MHz. I have a 256K L2 cache and a 1MB L3 cache. The machine has 1GB of RAM.

Some rather simple operations slow down considerably after the system's been up a while (and I do tend to leave it up for days or weeks at a time). I don't recall how long it had been up when I ran those tests. I just ran pystone again - it's been up 2 days, 19 hrs at the moment - and got significantly better numbers:

% python ~/src/python/head/dist/src/Lib/test/pystone.py
Pystone(1.1) time for 50000 passes = 3.82
This machine benchmarks at 13089 pystones/second
% python ~/src/python/head/dist/src/Lib/test/pystone.py
Pystone(1.1) time for 50000 passes = 3.83
This machine benchmarks at 13054.8 pystones/second
% python ~/src/python/head/dist/src/Lib/test/pystone.py
Pystone(1.1) time for 50000 passes = 3.78
This machine benchmarks at 13227.5 pystones/second

Rerunning the parrotbench code shows a decided improvement as well:

% make time
time python -O b.py >@out
real 0m40.018s
user 0m38.620s
sys  0m0.900s
cmp @out out

Skip

On Fri, Jan 02, 2004, Skip Montanaro wrote:
What version of OS X and developer tools? I'm a tiny bit surprised that you say your machine stays up for weeks at a time; that implies you don't install security updates. -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ Weinberg's Second Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization.

>> Some rather simple operations slow down considerably after the
>> system's been up awhile (and I do tend to leave it up for days or
>> weeks at a time). I don't recall how long it had been up when I ran
>> those tests. I just ran pystone again - it's been up 2 days, 19 hrs
>> at the moment - and got significantly better numbers:

aahz> What version of OS X and developer tools? I'm a tiny bit
aahz> surprised that you say your machine stays up for weeks at a time;
aahz> that implies you don't install security updates.

On the contrary. Apple doesn't release security updates all that often. I run Software Update daily. If the presence of new software updates was the only reason to reboot I'd almost never reboot. <wink> Just to make sure that was the case I just ran Software Update manually. Nothing is out-of-date (well, except for the fact that I haven't coughed up the dough for 10.3.x yet). I'm running an up-to-date version of 10.2.8. Skip

System is an AMD Athlon XP1600+ (1.4GHz) with 512MB PC2100 RAM, running FreeBSD 4.8.
----------------------------------------------------------------
$ python2.3
Python 2.3.3 (#4, Jan 4 2004, 12:39:32)
[GCC 2.95.4 20020320 [FreeBSD]] on freebsd4
Type "help", "copyright", "credits" or "license" for more information.
^D
$ make time
time python2.3 -O b.py >@out
21.19 real 20.88 user 0.26 sys
cmp @out out
$ python2.3 /usr/local/lib/python2.3/test/pystone.py
Pystone(1.1) time for 50000 passes = 1.89062
This machine benchmarks at 26446.3 pystones/second
-- Andrew I MacIntyre "These thoughts are mine alone..." E-mail: andymac@bullseye.apana.org.au (pref) | Snail: PO Box 370 andymac@pcug.org.au (alt) | Belconnen ACT 2616 Web: http://www.andymac.org/ | Australia

At 3:34 PM -0800 12/31/03, Guido van Rossum wrote:
Yup, works just fine. (You caught me during the east coast commute home :)
Happy New Year everyone!
Ho, ho, ho! -- Dan --------------------------------------"it's like this"------------------- Dan Sugalski even samurai dan@sidhe.org have teddy bears and even teddy bears get drunk

I suspect there are other folks who have run the pie-con benchmarks on their machines. Perhaps we should construct a chart. Times below are in seconds. ~60 Guido's ancient 650Mhz Pentium ~27 Guido's desktop at work ~15 IBM T40 laptop 88.550 dual 450 MHz Pentium 2 22.340 Athlon XP2800 (1243.363 MHz clock) The latter two times are for Python 2.3.3 built out of the box using the Makefile (make time) shipped with the parrotbenchmarks ftp file on an unloaded machine. The times reported are the user time from the time triplet.

Perhaps we can turn this into a benchmark comparison chart. In particular, in my experience pystone is a pretty good indicator of system performance even if it's a lousy benchmark. I'll report the pystone numbers for those same three systems: home desktop: 10438.4 pystones/second work desktop: 17421.6 pystones/second laptop: 30198.1 pystones/second Multiplying the pystone numbers with the parrotbench times should give a constant if the benchmarks are equivalent. But this doesn't look good: I get home desktop: 626304 work desktop: 470383 laptop: 452972 The home desktop is a clear outlier; it's more than twice as slow on the parrot benchmark, but only 2/3rds slower on pystone... (This is begging for a spreadsheet. :-) --Guido van Rossum (home page: http://www.python.org/~guido/)

On this box (3.05G P4, 512M, Fedora Core 1): Using Python2.3.3: real 0m16.027s user 0m14.940s sys 0m0.280s Pystone(1.1) time for 50000 passes = 1.43 This machine benchmarks at 34965 pystones/second Using current-CVS python (and the bytecode compiled with 2.3.3): real 0m14.556s user 0m14.130s sys 0m0.280s Pystone(1.1) time for 50000 passes = 1.41 This machine benchmarks at 35461 pystones/second (Hm: should the default number of passes for pystone be raised? One and a half seconds seems rather a short time for a benchmark...) -- Anthony Baxter <anthony@interlink.com.au> It's never too late to have a happy childhood.

Python 2.3.2 (#49) pystone test on my Win2k SP4 machine. Note this is a 1.6 ghz Pentium M (Centrino) Pystone(1.1) time for 50000 passes = 1.3762 This machine benchmarks at 36331.9 pystones/second Faster than Anthony's 3Ghz P4. I believe this processor has a very large on-board cache.. I'm not sure how large. It's a Dell Lattitude D800. My point being, I think pystone performance is greatly effected by CPU cache, more so than CPU clock speed. -- Brad Clements, bkc@murkworks.com (315)268-1000 http://www.murkworks.com (315)268-9812 Fax http://www.wecanstopspam.org/ AOL-IM: BKClements

My goal with this benchmark was not to compare CPUs but to see whether the pystone count times the time for the parrotbench is ~constant, and if it's not, what influences there are on the product. Can you report the parrotbench times? ftp://ftp.python.org/pub/python/parrotbench/parrotbench.tgz I'm only interested in the user time. --Guido van Rossum (home page: http://www.python.org/~guido/)

On 2 Jan 2004 at 9:57, Guido van Rossum wrote:
Pystone(1.1) time for 50000 passes = 1.3762 This machine benchmarks at 36331.9 pystones/second
Can you report the parrotbench times?
I'm on windows and don't have a 'time' executable, so I created a .bat file like this: O:\Python\parrot>type timit.bat echo "start" time < nul python -O b.py > xout echo "end" time < nul It said: start 14:23:54.85 end 14.24.10.09 that's about 16 seconds -- Brad Clements, bkc@murkworks.com (315)268-1000 http://www.murkworks.com (315)268-9812 Fax http://www.wecanstopspam.org/ AOL-IM: BKClements

Thanks! Foer others wanting to do the same on Windows, the latest parrotbench distro (1.0.4) contains a program t.py that should make this a bit easier: you can simply invoke python -O t.py (The version of t.py included in 1.0.3 was faulty. :-) --Guido van Rossum (home page: http://www.python.org/~guido/)

I fixed up my t.py to time both the imports and b.main() It reports .. 15.188 seconds on my P(M) 1.6 GHZ machine -- Brad Clements, bkc@murkworks.com (315)268-1000 http://www.murkworks.com (315)268-9812 Fax http://www.wecanstopspam.org/ AOL-IM: BKClements

I fixed up my t.py to time both the imports and b.main()
Right, that's what the latest version (1.0.4) now does.
It reports .. 15.188 seconds on my P(M) 1.6 GHZ machine
Hm... My IBM T40 with 1.4 GHz P(M) reports 15.608. I bet the caches are more similar, and affect performance more than CPU speed... --Guido van Rossum (home page: http://www.python.org/~guido/)

Guido van Rossum <guido@python.org> writes:
Hm... My IBM T40 with 1.4 GHz P(M) reports 15.608. I bet the caches are more similar, and affect performance more than CPU speed...
You have a 32K L1 and a 1024K (!) L2. What a great machine! As an example of the other end of the spectrum, I'm running current, but low-end hardware: a 2.2 GHz Celeron, 400 MHz FSB, 256MB DDR SDRAM. Verified working as expected by Intel Processor Speed Test. My L1 is 12K trace, 8K data. My L2 is only 128K, and when there's a miss there's no L3 to fall back on. Cost for processor and mobo, new, $120. I find this setup pretty snappy for what I do on it: development and home server. It's definitely not my game machine :-)

Python 2.2.1 (#1, Oct 17 2003, 16:36:36) [GCC 2.95.3 20010125 (prerelease, propolice)] on openbsd3 [best of 3]:
Pystone(1.1) time for 10000 passes = 0.93
This machine benchmarks at 10752.7 pystones/second

Python 2.3.3 (#15, Jan 2 2004, 14:39:36) [best of 3]:
Pystone(1.1) time for 50000 passes = 3.46
This machine benchmarks at 14450.9 pystones/second

Python 2.4a0 (#40, Jan 1 2004, 22:22:45) [current cvs] [best of 3]:
Pystone(1.1) time for 50000 passes = 2.91
This machine benchmarks at 17182.1 pystones/second (but see the p.s. below)

Now the parrotbench, version 1.0.4. [make extra passes to get .pyo first]

First, Python 2.3.3: best 3: 31.1/31.8/32.3
Next, Python 2.4a0, current cvs: best 3: 31.8/31.9/32.1

Since I noticed quite different ratios between the individual tests compared to what was posted by Seo Sanghyeon on the pypy list, here are my numbers (2.4a0):

hydra /home/kbk/PYTHON/python/nondist/sandbox/parrotbench$ make times
for i in 0 1 2 3 4 5 6; do echo b$i.py; time /home/kbk/PYSRC/python b$i.py >@out$i; cmp @out$i out$i; done
b0.py    5.48s real  5.30s user  0.05s system
b1.py    1.36s real  1.22s user  0.10s system
b2.py    0.44s real  0.42s user  0.04s system
b3.py    2.01s real  1.94s user  0.04s system
b4.py    1.69s real  1.63s user  0.05s system
b5.py    4.80s real  4.73s user  0.02s system
b6.py    1.84s real  1.56s user  0.26s system

I notice that some of these tests are a little faster on 2.3.3 while others are faster on 2.4, resulting in the overall time being about the same on both releases. N.B. compiling Python w/o the stack protector doesn't make a noticeable difference ;-) There may be some other problem with this box that I haven't yet discovered, but right now I'm blaming the tiny cache for performance being 2-3x lower than expected from the clock rate, compared to what others are getting. -- KBK

p.s. I saw quite a large outlier on 2.4 pystone when I first tried it. I didn't believe it, but was able to scroll back and clip it:

Python 2.4a0 (#40, Jan 1 2004, 22:22:45) [GCC 2.95.3 20010125 (prerelease, propolice)] on openbsd3
Type "help", "copyright", "credits" or "license" for more information.

This is 30% lower than the rate quoted above. I haven't been able to duplicate it. Maybe the OS or X was doing something which tied up the cache. This is a fairly lightly loaded machine running X, Ion, and emacs. I've also seen 20% variations in the 2.2.1 pystone benchmark. It seems to me that this benchmark is pretty cache sensitive and should be done on an unloaded system, preferably w/o X, and with the results averaged over many random trials if comparisons are desired, especially if the cache is small. I don't see the same variation in the parrotbench. It's just consistently low for this box.

Another small set of pystone and parrotbench results... Best of 7 runs on a standard Windows build of Python on Windows 2000:

                 P4 2.26GHz           P4 Cel 1.7GHz   P2 Cel 400MHz
bus              533 MHz              400 MHz         66 MHz
l2cache          512K                 128K            128K
memory           1G DualChan DDR333   256M DDR200     384M SDR66
2.2.2 Pystone    ~25600               ~18500          ~6086
2.3.2 Pystone    ~31800                               ~6830
2.3.2 Parrot     16.06s                               83.4s
2.3.2 Pys*Par    510708                               569622

A few things to note: I don't have consistent access to the 1.7GHz Celeron, which is why it didn't get any 2.3 tests. The P2 Celeron is a dual-processor system. There does not seem to be an easy way to give a process single-processor affinity on the command line; however, even inserting a pause into pystone in order to alter processor affinity does not seem to appreciably alter the pystone speed. This may suggest that Python has a working set so large that it doesn't fit into the 128K of L2 cache on either of the Celerons. This is supported by the fact that, based on the Python 2.2.2 pystone results, the 2.26 GHz P4 performs like a 2.35 GHz P4 Celeron (256/185*1.7), which could be due to the larger cache, the higher-speed memory, or more likely both. It is also likely that the P4s aren't as fast per clock as the P2 due to the heavy branch-misprediction penalties that the architecture suffers from. Strange thing: the 2.26 GHz P4 gains more in the pystone benchmark from the 2.2.2 -> 2.3.2 transition than the Celeron 400: 24% and 12% increases in pystones, respectively. - Josiah

On Thu, Jan 01, 2004 at 12:59:08PM -0500, Brad Clements wrote:
Yes, the Pentium M is Intel's fastest CPU, though they don't like to admit it lest people stop buying P4s. Current models have a 1MB L2 cache and a well-optimized P3-derived core. CPU cache size will have an effect on any benchmark. If one Python implementation outperforms another because of a better cache footprint, that's a good thing. Cache architecture will only become more important as time goes on.

On Thursday 01 January 2004 07:22 am, Anthony Baxter wrote: ...
(Hm: should the default number of passes for pystone be raised? One and a half seconds seems rather a short time for a benchmark...)
An old rule of thumb, when benchmarking something on many machines that exhibit a wide range of performance, is to try to organize the benchmarks in terms, not of completing a set number of repetitions, but rather of running as many repetitions as feasible within a (more or less) set time. Since, at the end, you'll be reporting in terms of "<whatever>s per second", it should make no real difference BUT it makes it more practical to run the "same" benchmark on machines (or more generally implementations) spanning orders of magnitude in terms of the performance they exhibit. (If the check for "how long have I been running so far" is very costly, you'll of course do it only "once every N repetitions" rather than doing it at every single pass). Wanting to run pystone on pypy, but not wanting to alter it TOO much from the CPython original so as to keep the results easily comparable, I just made the number of loops an optional command line parameter of the pystone script (as checked in within the pypy project). Since the laptops we had around at the Amsterdam pypy sprint ran about 0.8 to 3 pystones/sec with pypy (a few thousand times slower than with CPython 2.3.*), running all of the 50,000 iterations was just not practical at all. Anyway, the ability of optionally passing in the number of iterations on the command line would also help with your opposite problem of too-fast machines -- if 50k loops just aren't enough for a reasonably-long run, you could use more. Alex
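Alex's scheme -- run as many passes as fit in a fixed time budget, checking the clock only every N passes -- can be sketched roughly like this (the `benchmark` name, the half-second budget, and the batching constant are illustrative choices, not taken from pystone):

```python
import time

def benchmark(run_one_pass, budget=0.5, batch=1000):
    """Run as many passes as fit in roughly `budget` seconds,
    consulting the clock only once per `batch` passes so the
    cost of the timing check itself stays negligible."""
    passes = 0
    start = time.time()
    while True:
        for _ in range(batch):
            run_one_pass()
        passes += batch
        elapsed = time.time() - start
        if elapsed >= budget:
            # Report a rate, so results are comparable across machines
            return passes / elapsed

# Example with a trivial "pass"; a real benchmark would do real work here
rate = benchmark(lambda: sum(range(50)))
print("%.1f passes/second" % rate)
```

Because the result is a rate rather than a total time, the same script is usable unchanged on a machine thousands of times slower (as with pypy at the time), where a fixed 50,000-iteration run would be impractical.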

Yup. As a matter of historical detail, pystone used to have LOOPS set to 1000; in 1997 I changed it to 10K, and in 2002 I bumped it again to 50K. BTW, I'd gladly receive your patch for parameterizing LOOPS for inclusion into the standard Python library. --Guido van Rossum (home page: http://www.python.org/~guido/)

On Friday 02 January 2004 04:18 pm, Guido van Rossum wrote:
OK, I committed the modified pystone.py directly (I hope the change is small and "marginal" enough not to need formal review -- easy enough to back off if I've made some 'oops', anyway...). Alex

Thanks! Looks benign enough, even though my ideas about main() have evolved some. (http://www.artima.com/weblogs/viewpost.jsp?thread=4829) But that would be harder to fold into the existing pystone framework, so I'd say let's leave this alone. --Guido van Rossum (home page: http://www.python.org/~guido/)

Probably. Maybe you can try the new pystone from CVS, which has a command line option. (If I were to write it over I'd use the strategy from timeit, which has a self-adjusting strategy to pick an appropriate number of loops.)

I multiplied the two numbers (pystones/sec and parrotbench seconds) for your two runs and found the product to be much higher for your first run than for the second. This is suspicious (perhaps it points at too much variation in the pystone timing); for contrast, Skip's two runs before and after rebooting give products within 0.05 percent of each other. Hm, of course this could also have to do with Python versions. Let me try... Yes, it looks like for me, on one machine I have handy here, Python 2.3.3 gives a higher product than current CVS (though not by as much as you measured).

So what does this product mean? It's higher if pystone is faster, or parrotbench is slower. Any factor that makes both faster (or slower) by the same amount has no effect. Ah (thinking aloud here), the unit is "pystones". Not pystones per second, just pystones. And this leads to an interpretation: it is how many pystone loops execute in the time needed to complete the parrotbench benchmark (so the unit is really "pystones per parrotbench"). So what makes a machine run more pystones in a parrotbench? I don't know enough about CPUs and caches to draw a conclusion, and I've got to start doing real work today... --Guido van Rossum (home page: http://www.python.org/~guido/)
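The self-adjusting strategy Guido mentions from timeit can be sketched like this (the function name, the power-of-ten growth, and the 0.2-second threshold are illustrative choices here, not necessarily timeit's exact values):

```python
import timeit

def pick_loop_count(stmt, threshold=0.2):
    """Grow the loop count by powers of ten until a single timing
    run takes at least `threshold` seconds, then report both the
    loop count used and the resulting rate in loops per second."""
    timer = timeit.Timer(stmt)
    number = 1
    while True:
        elapsed = timer.timeit(number)
        if elapsed >= threshold:
            return number, number / elapsed
        number *= 10

loops, rate = pick_loop_count("sum(range(100))")
print("%d loops, %.0f loops/sec" % (loops, rate))
```

This sidesteps the "is 50K passes enough?" question entirely: fast machines get more loops, slow ones fewer, and the reported rate stays comparable.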

At 3:56 PM -0800 12/31/03, Dennis Allison wrote:
Well, add in:

real    1m20.412s
user    1m1.580s
sys     0m2.100s

for this somewhat creaky 600MHz G3 iBook... -- Dan --------------------------------------"it's like this"------------------- Dan Sugalski even samurai dan@sidhe.org have teddy bears and even teddy bears get drunk

I've been looking at user times only, but on that box the discrepancy between user and real time is enormous! It also suggests that a 600 MHz G3 and a 650 MHz P3 are pretty close, and contrary to what Apple seems to claim, the G3's MHz rating isn't worth much more than a P3's MHz rating. Could you run pystone too?

python -c 'from test.pystone import main; main(); main(); main()'

and then report the smallest pystones/second value. --Guido van Rossum (home page: http://www.python.org/~guido/)

At 4:26 PM -0800 12/31/03, Guido van Rossum wrote:
Yeah, that's not uncommon. I'm not sure if there's a problem with the time command, or if it's something else. This is a laptop (currently on battery power, though with reduced performance turned off) and it's got a 100MHz main memory bus, which is definitely a limiting factor once code slips out of L1 cache. It's faster than my Gameboy, but some days I wonder how much... :)
Possibly. I'm not sure it's necessarily the best machine for that comparison, given the power/performance tradeoffs with laptops. I may give it a whirl on one of the G3 desktop machines around here to see how much bus speed matters. (I won't be surprised, given the likely working set for an interpreter, to find that bus speed makes more of a difference than CPU speed)
Smallest is: Pystone(1.1) time for 50000 passes = 6.61 This machine benchmarks at 7564.3 pystones/second -- Dan --------------------------------------"it's like this"------------------- Dan Sugalski even samurai dan@sidhe.org have teddy bears and even teddy bears get drunk

Thanks -- though I made a mistake -- I should've asked for the largest pystone value -- larger is better for pystone, unlike for running times. Anyway, this value is reasonable given your parrotbench time. --Guido van Rossum (home page: http://www.python.org/~guido/)

[Dennis Allison]
[Guido van Rossum]
I've been looking at user times only, but on that box the discrepancy between user and real time is enormous!
Some data from an older machine, 199 MHz Pentium Pro, 128 MB, Fedora 1:

$ time python2.3 -O b.py >@out
real    2m8.363s
user    2m53.420s
sys     0m1.980s
$ python2.3 -c 'from test.pystone import main; main(); main(); main()'
Pystone(1.1) time for 50000 passes = 17
This machine benchmarks at 2941.18 pystones/second

(showing only the fastest) I know nothing about benchmarking, and I don't know whether anyone can use this data, but it sure won't hurt anybody either ;) Happy 2004, Gerrit Holl. -- 274. If any one hire a skilled artizan, he shall pay as wages of the ... five gerahs, as wages of the potter five gerahs, of a tailor five gerahs, of ... gerahs, ... of a ropemaker four gerahs, of ... gerahs, of a mason ... gerahs per day. -- 1780 BC, Hammurabi, Code of Law -- Asperger's Syndrome - a personal approach: http://people.nl.linux.org/~gerrit/english/

Dennis> I suspect there are other folks who have run the pie-con
Dennis> benchmarks on their machines. Perhaps we should construct a
Dennis> chart.

800 MHz Ti PowerBook (G4), using Python from CVS, output of make (second run of two, to make sure pyo files were already generated):

time python -O b.py >@out
real    0m49.999s
user    0m46.610s
sys     0m1.030s
cmp @out out

The best-of-three pystone measurement is 10964.9 pystones/second.

The Makefile should probably parameterize "python" like so:

PYTHON = python

time:
	time $(PYTHON) -O b.py >@out
	cmp @out out

... so people can specify precisely which version of Python to use. Skip

Skip Montanaro <skip@pobox.com> writes:
Bloody hell, that's about what I get on my 600Mhz G3 iBook (same model as Dan's, sounds like). Does your TiBook have no cache or a *really* slow bus or something? Cheers, mwh -- I've reinvented the idea of variables and types as in a programming language, something I do on every project. -- Greg Ward, September 1998

>> 800 MHz Ti PowerBook (G4), using Python from CVS, output of make
>> (second run of two, to make sure pyo files were already generated):
>>
>> time python -O b.py >@out
>>
>> real 0m49.999s
>> user 0m46.610s
>> sys 0m1.030s
>> cmp @out out
>>
>> The best-of-three pystone measurement is 10964.9 pystones/second.

Michael> Bloody hell, that about what I get on my 600Mhz G3 iBook (same
Michael> model as Dan's, sounds like). Does your TiBook have no cache
Michael> or a *really* slow bus or something?

The Apple System Profiler says my bus speed is 133MHz. I have a 256K L2 cache and a 1MB L3 cache. The machine has 1GB of RAM. Some rather simple operations slow down considerably after the system's been up awhile (and I do tend to leave it up for days or weeks at a time). I don't recall how long it had been up when I ran those tests. I just ran pystone again - it's been up 2 days, 19 hrs at the moment - and got significantly better numbers:

% python ~/src/python/head/dist/src/Lib/test/pystone.py
Pystone(1.1) time for 50000 passes = 3.82
This machine benchmarks at 13089 pystones/second
% python ~/src/python/head/dist/src/Lib/test/pystone.py
Pystone(1.1) time for 50000 passes = 3.83
This machine benchmarks at 13054.8 pystones/second
% python ~/src/python/head/dist/src/Lib/test/pystone.py
Pystone(1.1) time for 50000 passes = 3.78
This machine benchmarks at 13227.5 pystones/second

Rerunning the parrotbench code shows a decided improvement as well:

% make
time python -O b.py >@out
real    0m40.018s
user    0m38.620s
sys     0m0.900s
cmp @out out

Skip

On Fri, Jan 02, 2004, Skip Montanaro wrote:
What version of OS X and developer tools? I'm a tiny bit surprised that you say your machine stays up for weeks at a time; that implies you don't install security updates. -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ Weinberg's Second Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization.

>> Some rather simple operations slow down considerably after the >> system's been up awhile (and I do tend to leave it up for days or >> weeks at a time). I don't recall how long it had been up when I ran >> those tests. I just ran pystone again - it's been up 2 days, 19 hrs >> at the moment - and got significantly better numbers: aahz> What version of OS X and developer tools? I'm a tiny bit aahz> surprised that you say your machine stays up for weeks at a time; aahz> that implies you don't install security updates. On the contrary. Apple doesn't release security updates all that often. I run Software Update daily. If the presence of new software updates was the only reason to reboot I'd almost never reboot. <wink> Just to make sure that was the case I just ran Software Update manually. Nothing is out-of-date (well, except for the fact that I haven't coughed up the dough for 10.3.x yet). I'm running an up-to-date version of 10.2.8. Skip

System is an AMD Athlon XP1600+ (1.4GHz) with 512MB PC2100 RAM, running FreeBSD 4.8.
----------------------------------------------------------------
$ python2.3
Python 2.3.3 (#4, Jan 4 2004, 12:39:32)
[GCC 2.95.4 20020320 [FreeBSD]] on freebsd4
Type "help", "copyright", "credits" or "license" for more information.
^D
$ make time
time python2.3 -O b.py >@out
21.19 real  20.88 user  0.26 sys
cmp @out out
$ python2.3 /usr/local/lib/python2.3/test/pystone.py
Pystone(1.1) time for 50000 passes = 1.89062
This machine benchmarks at 26446.3 pystones/second
-- Andrew I MacIntyre "These thoughts are mine alone..." E-mail: andymac@bullseye.apana.org.au (pref) | Snail: PO Box 370 andymac@pcug.org.au (alt) | Belconnen ACT 2616 Web: http://www.andymac.org/ | Australia
participants (14): Aahz, Alex Martelli, Andrew MacIntyre, Anthony Baxter, Brad Clements, Dan Sugalski, Dennis Allison, Gerrit Holl, Gregory P. Smith, Guido van Rossum, Josiah Carlson, kbk@shore.net, Michael Hudson, Skip Montanaro