[pypy-dev] Benchmarks

Antonio Cuni anto.cuni at gmail.com
Fri Jul 15 10:45:04 CEST 2011


On 12/07/11 01:20, Maciej Fijalkowski wrote:
> Hi
> 
> I'm a bit worried with our current benchmarks state. We have around 4
> benchmarks that had reasonable slowdowns recently
[cut]

OK, let's try to summarize what we have discovered so far about the benchmark regressions.

> 
> Current list:
> 
> http://speed.pypy.org/timeline/?exe=1&base=none&ben=spectral-norm&env=tannit&revs=50

This is weird. Maybe I did something wrong (like picking the wrong revisions
to test), but I cannot reproduce the slowdown.
These are the numbers on my machine, in a 32-bit chroot:

$ ./pypy-c-jit-45360-867e8ffff7a8-linux/bin/pypy
~/pypy/benchmarks/own/spectral-norm.py --take_geo_mean
0.0256132620823
$ ./pypy-c-jit-45373-582113929b62-linux/bin/pypy
~/pypy/benchmarks/own/spectral-norm.py --take_geo_mean
0.0256120378632
$ ./pypy-c-jit-45412-3476b9be3cec-linux/bin/pypy
~/pypy/benchmarks/own/spectral-norm.py --take_geo_mean
0.025725552797

On tannit, I get similar results (a bit faster than on my machine, but again
no real difference between the revisions).
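
For anyone who wants to redo this kind of check on another benchmark, here is
a small driver for what I did by hand above (only a sketch: the benchmark path
is a placeholder, and you have to point the binary paths at whatever builds
you have locally):

import subprocess

# The builds to compare; these are the ones from the commands above,
# adjust the paths to your local setup.
BINARIES = [
    "./pypy-c-jit-45360-867e8ffff7a8-linux/bin/pypy",
    "./pypy-c-jit-45373-582113929b62-linux/bin/pypy",
    "./pypy-c-jit-45412-3476b9be3cec-linux/bin/pypy",
]
BENCHMARK = "benchmarks/own/spectral-norm.py"   # placeholder path

for binary in BINARIES:
    # run the benchmark exactly like in the commands above and print one
    # number per build, so a regression between revisions stands out
    out = subprocess.check_output([binary, BENCHMARK, "--take_geo_mean"])
    print("%s  %s" % (binary, out.decode().strip()))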


> http://speed.pypy.org/timeline/?exe=1&base=none&ben=spitfire&env=tannit&revs=50

I think Armin investigated this, and the outcome was that it's caused by the
changes we made to the GC during the sprint. Armin, can you confirm?
Do we have a solution?

> This is a good example why we should not work the way we work now:
> 
> http://speed.pypy.org/timeline/?exe=1&base=none&ben=slowspitfire&env=tannit&revs=200
> 
> There was an issue, then the issue was fixed, but apparently not quite
> (7th of June is quite a bit slower than 25th of May), and then recently
> we introduced something that makes it faster altogether. Can we even
> fish out the original issue?

Did anybody look into this?

> http://speed.pypy.org/timeline/?exe=1&base=none&ben=bm_mako&env=tannit&revs=200

I investigated this. The culprit is 5b62f71347c8, in particular the change to
policy.py which enables tracing inside lltypesystem.rbuilder.

With this patch, mako is fast again:
http://paste.pocoo.org/show/439257/
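
For context, the kind of change involved is roughly the following (an
illustrative sketch with made-up names, not the actual diff of 5b62f71347c8):
the codewriter policy decides, per module, whether the tracer may follow
calls into a function, so enabling lltypesystem.rbuilder changes which
helpers end up inside the traces:

# Illustrative sketch only -- hypothetical names, not PyPy's real policy
# code.  The idea: decide per module whether the JIT may trace into a
# function defined there.
TRACEABLE_MODULES = set([
    "pypy.rpython.lltypesystem.rbuilder",   # the module the commit enabled
])

def look_inside_function(func):
    # trace into the function only if its module is whitelisted; otherwise
    # the call stays in the trace as a residual (black-box) call
    return func.__module__ in TRACEABLE_MODULES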

The weird thing is that the jit-summary output shows no obvious difference
between the two runs, apart from the total time:
http://paste.pocoo.org/show/439263/
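
Since comparing the two pastes by eye is error-prone, a throwaway helper along
these lines (again only a sketch, assuming the summary consists of
"name: value" lines) could at least show whether any counter drifts between
the two runs:

def read_summary(path):
    # parse lines that look roughly like "counter-name:   value"
    counters = {}
    for line in open(path):
        if ":" not in line:
            continue
        name, rest = line.split(":", 1)
        fields = rest.split()
        if not fields:
            continue
        try:
            counters[name.strip()] = float(fields[0])
        except ValueError:
            pass
    return counters

def diff_summaries(path_a, path_b, threshold=0.05):
    # print the counters that differ by more than the given relative amount
    a, b = read_summary(path_a), read_summary(path_b)
    for name in sorted(set(a) & set(b)):
        biggest = max(abs(a[name]), abs(b[name]))
        if biggest and abs(a[name] - b[name]) / biggest > threshold:
            print("%s: %s -> %s" % (name, a[name], b[name]))

# placeholder file names: save the two pastes locally first
diff_summaries("jit-summary-before.txt", "jit-summary-after.txt")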

Alex, do you feel like investigating this further?

> http://speed.pypy.org/timeline/?exe=1&base=none&ben=nbody_modified&env=tannit&revs=50
> (is it relevant or just noise?)
> 
> http://speed.pypy.org/timeline/?exe=1&base=none&ben=telco&env=tannit&revs=50

I did not look into these two. Anybody?

ciao,
Anto

