Re: [Speed] standalone PyPy benchmarks ported
On Mon, Sep 17, 2012 at 11:36 AM, Maciej Fijalkowski <fijall@gmail.com> wrote:
On Mon, Sep 17, 2012 at 5:00 PM, Brett Cannon <brett@python.org> wrote:
On Sun, Sep 16, 2012 at 10:54 AM, Maciej Fijalkowski <fijall@gmail.com> wrote:
On Sun, Sep 16, 2012 at 4:43 PM, Brett Cannon <brett@python.org> wrote:
Quick question about the hexiom2 benchmark: what does it measure? It is by far the slowest benchmark I ported, and considering it isn't a real-world app benchmark I want to make sure its slowness is worth it. Otherwise I would rather drop it, since having something run 1/25 as many iterations as the other simple benchmarks seems to water down its robustness.
It's a puzzle solver. It got included because PyPy 1.9 got slower than 1.8 on this particular benchmark that people were actually running somewhere, so it has *some* value.
Fair enough. Just wanted to make sure that it was worth having a slow execution over.
I wonder, does adding a fixed random number seed help the distribution?
Fix how? hexiom2 doesn't use a random value for anything.
OK, then please explain why having 1/25th of the iterations kills robustness?
Fewer iterations to help smooth out any bumps in the measurements, e.g. 4 iterations compared to 100 doesn't lead to as even a measurement. You would hope that because the benchmark runs for so long it would level out within a single run instead of needing multiple runs to get the same smoothing.
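To illustrate the concern (a hypothetical sketch, not from the benchmark suite; workload() and the iteration counts below are made up for illustration), the spread of the reported mean time shrinks as the number of timed iterations grows:

    import statistics
    import time

    def workload():
        # Hypothetical stand-in for a benchmark body such as hexiom2's solver.
        total = 0
        for i in range(200_000):
            total += i * i
        return total

    def sample_mean(iterations):
        # Time `iterations` runs and return the mean per-run time.
        times = []
        for _ in range(iterations):
            start = time.perf_counter()
            workload()
            times.append(time.perf_counter() - start)
        return statistics.mean(times)

    # Collect several reported means at each iteration count and compare spread:
    # the mean computed from 100 runs typically varies much less between
    # repetitions than the mean computed from only 4 runs.
    for iterations in (4, 100):
        means = [sample_mean(iterations) for _ in range(5)]
        print(iterations, "iterations -> stdev of reported mean:",
              statistics.stdev(means))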
-Brett
On Fri, Sep 14, 2012 at 5:44 PM, Maciej Fijalkowski <fijall@gmail.com> wrote:
On Fri, Sep 14, 2012 at 10:19 PM, Brett Cannon <brett@python.org> wrote:
So I managed to get the following benchmarks moved into the unladen repo (not pushed yet until I figure out some reasonable scaling values; some finish probably too fast and others go for a while):
- chaos
- fannkuch
- meteor-contest (renamed meteor_contest)
- spectral-norm (renamed spectral_norm)
- telco
- bm_mako (renamed bm_mako_v2; also pulled in mako 0.9.7 for this benchmark)
- go
- hexiom2
- json_bench (renamed json_dump_v2)
- raytrace_simple (renamed raytrace)
Most of the porting was range/xrange related. After that it was str/unicode. I also stopped having the benchmarks write out files, as the output was only ever used to verify results and was not a core part of the benchmark.
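As a rough illustration of the kind of change involved (a hypothetical snippet, not taken from any specific benchmark):

    # Hypothetical before/after showing the typical Python 2 -> 3 edits.

    def render_all(values):
        # The Python 2 version would have used xrange() and unicode():
        #     for i in xrange(len(values)):
        #         yield unicode(values[i])
        # In Python 3, range() is lazy and str is already unicode.
        for i in range(len(values)):
            yield str(values[i])

    print(list(render_all([1, 2.5, "three"])))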
That leaves us with the benchmarks that rely on third-party projects.
The chameleon benchmark can probably be ported as chameleon has a released version running on Python 3. But django and html5lib only have in-development versions that support Python 3. If we want to pull in the tip of their repos, those benchmarks can also be ported now rather than later. Do people have opinions on benchmarking in-development code vs. released code?
There is also the sphinx benchmark, but that requires getting CPython's docs building under Python 3 (see http://bugs.python.org/issue10224).
great job!