Re: [Speed] What PyPy benchmarks are (un)important?
We could easily add the newer benchmarks to the front page; that is straightforward for the first plot. But the historical plot depends on having data across all versions, so *that* geometric average would have to be computed over the common set of 20 benchmarks shared by all PyPy versions, which means it would differ from the first average.
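To make the point concrete, here is a minimal sketch of why the two averages diverge (the benchmark names and numbers below are invented, not real speed.pypy.org data):

```python
from math import prod

# Hypothetical normalized timings (relative to CPython) per PyPy version.
results = {
    "pypy-1.9": {"ai": 0.21, "chaos": 0.18, "float": 0.25},
    "pypy-2.0": {"ai": 0.19, "chaos": 0.17, "float": 0.22, "json_bench": 0.30},
}

def geomean(values):
    """Geometric mean of a list of positive numbers."""
    return prod(values) ** (1.0 / len(values))

# Common subset: benchmarks present in *every* version, so the historical
# plot compares like with like across versions.
common = set.intersection(*(set(v) for v in results.values()))

for version, timings in results.items():
    full = geomean(list(timings.values()))
    restricted = geomean([timings[b] for b in common])
    print(f"{version}: all benchmarks {full:.3f}, common set {restricted:.3f}")
```

The average over all benchmarks a version happens to have will generally not match the average over only the common set, so the front-page number and the historical number would disagree unless both use the same set.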
Alternatively, we can drop the older PyPy versions and start with the oldest one that has data for the full set of benchmarks.
Any other ideas?
Miquel
2012/9/13 Maciej Fijalkowski <fijall@gmail.com>:
On Thu, Sep 13, 2012 at 12:35 AM, Brett Cannon <brett@python.org> wrote:
I went through the list of benchmarks that PyPy has to see which ones could be ported to Python 3 now (others can be ported in the future, but they depend on a project that has not yet released an official version with Python 3 support):
ai chaos fannkuch float meteor-contest nbody_modified richards spectral-norm telco
bm_chameleon* bm_mako go hexiom2 json_bench pidigits pyflate-fast raytrace-simple sphinx*
The first grouping is the 20 shown on the speed.pypy.org homepage; the rest are in the complete list. Anything with an asterisk has an external dependency that is not already in the unladen benchmarks.
Are the twenty shown on the homepage of speed.pypy.org in some way special, or were they simply the first benchmarks that you were good/bad at, or what? Are there any benchmarks here that are particularly good or bad? I'm trying to prioritize which benchmarks I port so that if I hit a time crunch I get the critical ones moved first.
The 20 shown on the front page are the ones for which we have full historical data, so we can compare across versions. The others are simply newer.
I don't think there is any priority associated with them; we should probably put the others on the front page too, despite not having full data.