Quick question about the hexiom2 benchmark: what does it measure? It is by far the slowest benchmark I ported, and since it isn't a real-world app benchmark I want to make sure the slowness is worth it. Otherwise I would rather drop it, since having something run 1/25 as many iterations as the other simple benchmarks seems to water down its robustness.
On Fri, Sep 14, 2012 at 5:44 PM, Maciej Fijalkowski <fijall@gmail.com> wrote:
On Fri, Sep 14, 2012 at 10:19 PM, Brett Cannon <brett@python.org> wrote:
So I managed to get the following benchmarks moved into the unladen repo (not pushed yet until I figure out some reasonable scaling values, as some probably finish too fast and others go for a while):
chaos
fannkuch
meteor-contest (renamed meteor_contest)
spectral-norm (renamed spectral_norm)
telco
bm_mako (renamed bm_mako_v2; also pulled in mako 0.9.7 for this benchmark)
go
hexiom2
json_bench (renamed json_dump_v2)
raytrace_simple (renamed raytrace)
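On the scaling question, this is a rough sketch of the kind of per-benchmark iteration knob I have in mind; the function names, workload, and counts below are made up for illustration and are not what is in the repo:

    # Hypothetical sketch: an iteration-count knob so that fast benchmarks
    # (e.g. chaos) and slow ones (e.g. hexiom2) run for comparable wall-clock
    # time. Names and counts are illustrative only.
    import time

    def bench(loops, workload):
        """Run `workload` `loops` times and return per-iteration timings."""
        times = []
        for _ in range(loops):
            t0 = time.time()
            workload()
            times.append(time.time() - t0)
        return times

    if __name__ == "__main__":
        # A slow benchmark would get far fewer iterations than a cheap one;
        # picking those scaling values is the open question.
        results = bench(loops=4, workload=lambda: sum(range(10**6)))
        print(min(results))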
Most of the porting was range/xrange related. After that it was str/unicode. I also stopped having the benchmarks write out files, as that was only done to verify results and is not a core part of the benchmarks.
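For a concrete picture, this is the flavor of the mechanical changes involved; the function and names below are invented for illustration and are not code from any of the benchmarks:

    # Illustrative only: typical Python 2 -> 3 edits the ports required.
    #
    # Python 2 spelling (roughly what the benchmarks contained):
    #     for i in xrange(n):
    #         out = unicode(data).encode("utf-8")
    #
    # Python 3 spelling: xrange is gone (range is already lazy) and str is
    # Unicode by default, so the explicit unicode() call disappears.
    def process(data, n):
        out = b""
        for _ in range(n):                       # was: xrange(n)
            out = str(data).encode("utf-8")      # was: unicode(data).encode("utf-8")
        return out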
That leaves us with the benchmarks that rely on third-party projects. The chameleon benchmark can probably be ported, as Chameleon has a released version that runs on Python 3. But django and html5lib only have in-development versions that support Python 3. If we want to pull in the tip of their repos, then those benchmarks can also be ported now rather than later. Do people have opinions on benchmarking against in-development code vs. released versions?
There is also the sphinx benchmark, but that requires getting CPython's docs building under Python 3 (see http://bugs.python.org/issue10224).
great job!