Re: [Speed] standalone PyPy benchmarks ported
I just pushed the changes.
On Fri, Sep 14, 2012 at 4:19 PM, Brett Cannon <brett@python.org> wrote:
So I managed to get the following benchmarks moved into the unladen repo (not pushed yet until I figure out some reasonable scaling values, as some probably finish too quickly and others run for a while):
- chaos
- fannkuch
- meteor-contest (renamed meteor_contest)
- spectral-norm (renamed spectral_norm)
- telco
- bm_mako (renamed bm_mako_v2; also pulled in mako 0.9.7 for this benchmark)
- go
- hexiom2
- json_bench (renamed json_dump_v2)
- raytrace_simple (renamed raytrace)
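For the scaling values, one option is to calibrate an iteration count per benchmark so each run takes a minimum wall-clock time. A minimal sketch below; calibrate_iterations and target_seconds are hypothetical names for illustration, not part of the actual benchmark runner:

    # Hypothetical helper -- a sketch only, not the unladen runner's API.
    import timeit

    def calibrate_iterations(bench_func, target_seconds=1.0):
        """Double the iteration count until one run of bench_func(n) takes
        at least target_seconds, then return that count."""
        n = 1
        while True:
            start = timeit.default_timer()
            bench_func(n)
            elapsed = timeit.default_timer() - start
            if elapsed >= target_seconds:
                return n
            n *= 2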
Most of the porting was range/xrange related. After that it was str/unicode. I also stopped having the benchmarks write out files, since the output was only used to verify results and was not a core part of the benchmark.
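For reference, the typical changes looked something like this sketch (illustrative only; not lifted from any specific benchmark):

    # Illustrative sketch of the kind of 2-to-3 changes involved.

    def spread_points(n):
        # Python 2 used xrange() for a lazy iterator; in Python 3, range()
        # is already lazy, so xrange() calls simply become range().
        return [i / n for i in range(n)]   # / is true division in Python 3

    def label(value):
        # Python 2 code mixed str and unicode; in Python 3, str is always
        # text, so u'' prefixes and str/unicode branching can be dropped.
        return "value: {0}".format(value)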
That leaves us with the benchmarks that rely on third-party projects. The chameleon benchmark can probably be ported, since chameleon has a released version that runs on Python 3. But django and html5lib only have in-development versions that support Python 3. If we are willing to pull in the tip of their repos, then those benchmarks can also be ported now rather than later. Does anyone have opinions on benchmarking against in-development code vs. released code?
There is also the sphinx benchmark, but that requires getting CPython's docs building under Python 3 (see http://bugs.python.org/issue10224).