[pypy-dev] benchmarking input

Leonardo Santagada santagada at gmail.com
Mon Sep 28 04:04:26 CEST 2009


On Sep 27, 2009, at 12:47 PM, Armin Rigo wrote:

> Hi Leonardo,
>
> On Fri, Sep 25, 2009 at 06:52:44PM -0300, Leonardo Santagada wrote:
>> I want to know why PyPy doesn't use the unladen swallow benchmarks in
>> complement to the ones already there and maybe reuse and extend their
>> reporting tools. This could make comparing results easier and divide
>> the work of creating comprehensive benchmarks for python.
>
> A number of benchmarks are not applicable to us, or they are
> uninteresting at this point (e.g. pickling, regexp, or just
> microbenchmarks...).

Uninteresting for benchmarking the JIT, but important for Python users.

> That would leave 2 usable benchmarks, at a first glance: 'ai', and
> possibly 'spitfire/slowspitfire'.

The django one is also interesting.

> (Btw, I wonder why they think that richards is "too artificial" when
> they include a number of microbenchmarks that look far more artificial
> to me...)

I thought that too... maybe just adding richards is okay; they can
discard the results if they want.

I think we should talk to them and add to their benchmarks. Maybe
creating a Python benchmark project on Google Code, to be moved to
python.org together with the stdlib separation, would be a good idea to
bring the community together. Using the same benchmark framework could
help both PyPy (they already process the benchmarks and do a form of
reporting) and Unladen Swallow (the benchmarks that PyPy adds would
probably expose potential problems for their JIT).

If you would like to try this course, I could talk to the guys there
about making a separate project... maybe even start sharing stdlib
tests like you talked about at PyCon 09.

--
Leonardo Santagada
santagada at gmail.com

