[pypy-dev] New speed.pypy.org version
tobami at googlemail.com
Fri Jun 25 19:14:23 CEST 2010
the baseline problem you mention only happens with some benchmarks, so I
will risk the guess that the cpython results currently present are not from
the last run, and that in the case you point out (only for twisted_tcp) it
changed quite a bit. If the next results overwrite cpython's results, we'll
see whether that has been the case.
2010/6/25 Maciej Fijalkowski <fijall at gmail.com>
> A bit more important problem - results seem to be messed up. I think
> there is something wrong with baselines. Look here:
> on twisted_tcp
> On Fri, Jun 25, 2010 at 5:08 AM, Miquel Torres <tobami at googlemail.com>
> > Hi all!
> > I want to announce a new version of the benchmarks site speed.pypy.org.
> > After about 6 months, it finally shows the vision I had for such a site:
> > useful not only for pypy developers but also for the general public
> > following pypy's, or even other python implementations', development. On
> > to the changes.
> > There are now three views: "Changes", "Timeline" and "Comparison":
> > The Overview was renamed to Changes, and its inline plot bars were removed
> > because you can now get the exact same plot in the Comparison view (and
> > then some).
> > The Timeline got a selectable baseline and "humanized" date labels for the
> > time axis.
> > The new Comparison view allows, well, comparing "competing" implementations,
> > which will also be of interest to the wider Python community (especially
> > once we can add Unladen Swallow, IronPython and Jython results).
> > Two examples of interesting comparisons are:
> > - relative bars
> > (http://speed.pypy.org/comparison/?bas=2%2B35&chart=relative+bars): here
> > you can see that the jit is faster than psyco in all cases except spambayes
> > and slowspitfire, where the jit cannot make up for pypy-c's abysmal
> > performance. Interestingly, in the only other case where the jit is slower
> > than cpython, the ai benchmark, psyco performs even worse.
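The relative-bars comparison described above boils down to dividing each interpreter's time by the baseline's time per benchmark. Here is a minimal sketch of that normalization; the interpreter names match the email, but the timings are made-up illustrations, not actual speed.pypy.org data:

```python
# Seconds per benchmark (illustrative values only, not real results).
times = {
    "psyco":      {"ai": 4.0, "spambayes": 2.0, "richards": 1.5},
    "pypy-c-jit": {"ai": 5.0, "spambayes": 2.5, "richards": 0.3},
}

baseline = "psyco"  # the "bas=" parameter in the comparison URL
for interp, results in times.items():
    # A relative value below 1.0 means this interpreter is faster
    # than the baseline on that benchmark; above 1.0 means slower.
    relative = {b: t / times[baseline][b] for b, t in results.items()}
    print(interp, {b: round(r, 2) for b, r in relative.items()})
```

With these invented numbers, pypy-c-jit would plot at 0.2 on richards (much faster than the baseline) but 1.25 on ai (slower), which is the kind of mixed picture the email describes.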
> > - stacked bars
> > This is not meant to "demonstrate" that overall the jit is over two times
> > faster than cpython. It is just another way for a developer to picture how
> > long a program would take to complete if it were composed of 21 such
> > tasks. You can see that cpython's benchmarks (the normalization chosen)
> > each take 1 "relative" second. pypy-c needs more or less the same time,
> > some "tasks" being slower and some faster. Psyco shows an interesting
> > picture: from meteor-contest downwards (fortuitously), all benchmarks are
> > "compressed", which means they are sped up by psyco quite a lot. But any
> > further speed-up wouldn't make the overall time much shorter, because the
> > remaining group of benchmarks now takes most of the time to complete.
> > pypy-c-jit is an even more extreme case of this: if the jit accelerated
> > all "fast" benchmarks to 0 seconds (infinitely fast), it would only get
> > about twice as fast as it is now, because ai, slowspitfire, spambayes and
> > twisted_tcp now need half the execution time. A good demonstration of
> > "you are only as fast as your slowest part". Of course the aggregate of
> > all benchmarks is not a real workload, but it is still fun.
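The "only as fast as your slowest part" arithmetic above can be checked with a few lines of Python. The numbers are invented to match the email's rough proportions (a handful of slow benchmarks taking about half the total), not real measurements:

```python
# Illustrative normalized times, not real speed.pypy.org data:
# 17 benchmarks the jit has already made very fast, plus 4 slow ones
# (ai, slowspitfire, spambayes, twisted_tcp) taking about half the total.
fast = [0.05] * 17
slow = [0.2, 0.2, 0.25, 0.25]

total = sum(fast) + sum(slow)

# Best case: every "fast" benchmark drops to 0 seconds (infinitely fast).
# The stacked total can then only shrink to the slow group's share.
best_case = sum(slow)

print(f"current total: {total:.2f}")
print(f"best possible: {best_case:.2f}")
print(f"maximum overall speed-up: {total / best_case:.2f}x")
```

With these proportions the maximum overall speed-up comes out just under 2x, matching the email's observation that eliminating all the fast benchmarks entirely would only make the aggregate about twice as fast.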
> > I hope you find the new version useful, and as always any feedback is
> > welcome.
> > Cheers!
> > Miquel
> > _______________________________________________
> > pypy-dev at codespeak.net
> > http://codespeak.net/mailman/listinfo/pypy-dev