[pypy-dev] Benchmarks
Maciej Fijalkowski
fijall at gmail.com
Sat Jul 16 22:44:06 CEST 2011
On Tue, Jul 12, 2011 at 1:20 AM, Maciej Fijalkowski <fijall at gmail.com> wrote:
> Hi
>
> I'm a bit worried about the current state of our benchmarks. We have
> around 4 benchmarks that showed noticeable slowdowns recently, and we
> keep adding new features that speed up other things. How can we even
> say we have actually fixed the original issue? Can we have a policy of
> not merging new performance features before we have a story for why the
> benchmarks got slower?
>
> Current list:
>
> http://speed.pypy.org/timeline/?exe=1&base=none&ben=spectral-norm&env=tannit&revs=50
This fixed itself; recent runs are fast again (and Anto could not
reproduce it at all).
>
> http://speed.pypy.org/timeline/?exe=1&base=none&ben=spitfire&env=tannit&revs=50
Armin will have a look one day.
>
> This is a good example why we should not work the way we work now:
>
> http://speed.pypy.org/timeline/?exe=1&base=none&ben=slowspitfire&env=tannit&revs=200
Do we say "meh" and go on?
>
> There was an issue, then the issue was fixed, but apparently not quite
> (the 7th of June is quite a bit slower than the 25th of May), and then
> recently we introduced something that makes it faster altogether. Can
> we even fish out the original issue?
>
> http://speed.pypy.org/timeline/?exe=1&base=none&ben=bm_mako&env=tannit&revs=200
No clue so far?
>
> http://speed.pypy.org/timeline/?exe=1&base=none&ben=nbody_modified&env=tannit&revs=50
> (is it relevant or just noise?)
Just noise, maybe?
>
> http://speed.pypy.org/timeline/?exe=1&base=none&ben=telco&env=tannit&revs=50
Nobody has looked yet.
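On the "relevant or just noise?" question: one simple way to decide is to
treat a change as a regression only if the new mean timing exceeds the old
mean by several standard deviations of the baseline runs. This is a minimal
sketch, not part of the speed.pypy.org infrastructure; the function name and
the timing numbers are made up for illustration.

```python
import statistics

def looks_like_regression(baseline, candidate, sigma=3.0):
    """Flag a slowdown only if the candidate's mean timing exceeds the
    baseline mean by more than `sigma` baseline standard deviations."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return statistics.mean(candidate) > mean + sigma * stdev

# Hypothetical timings (seconds) for one benchmark before and after a change
old_runs = [1.02, 0.99, 1.01, 1.00, 0.98]
new_runs = [1.10, 1.12, 1.09, 1.11, 1.08]
print(looks_like_regression(old_runs, new_runs))   # clear slowdown -> True
print(looks_like_regression(old_runs, old_runs))   # same data -> False
```

A threshold like 3 sigma is arbitrary; a proper two-sample test would be
more rigorous, but even this crude check separates the nbody_modified-style
wiggles from real regressions.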
>
> Cheers,
> fijal
>