[Python-Dev] [GSoC] Developing a benchmark suite (for Python 3.x)

Maciej Fijalkowski fijall at gmail.com
Fri Apr 8 08:16:09 CEST 2011


On Fri, Apr 8, 2011 at 3:29 AM, Jesse Noller <jnoller at gmail.com> wrote:
> On Thu, Apr 7, 2011 at 7:52 PM, Michael Foord <fuzzyman at voidspace.org.uk> wrote:
>> On 08/04/2011 00:36, Anthony Scopatz wrote:
>>
>> On Thu, Apr 7, 2011 at 6:11 PM, Michael Foord <fuzzyman at voidspace.org.uk>
>> wrote:
>>>
>>> On 07/04/2011 22:41, Antoine Pitrou wrote:
>>>>
>>>> On Thu, 07 Apr 2011 17:32:24 -0400
>>>> Tres Seaver <tseaver at palladion.com> wrote:
>>>>>>
>>>>>> Right now, we are talking about building "speed.python.org" to test
>>>>>> the speed of python interpreters, over time, and alongside one another
>>>>>> - cython *is not* an interpreter.
>>>>>>
>>>>>> Cython is out of scope for this.
>>>>>
>>>>> Why is it out of scope to use the benchmarks and test harness to answer
>>>>> questions like "can we use Cython to provide optional optimizations for
>>>>> the stdlib"?  I can certainly see value in havng an objective way to
>>>>> compare the macro benchmark performance of a Cython-optimized CPython
>>>>> vs. a vanilla CPython, as well as vs. PyPy, Jython, or IronPython.
>>>>
>>>> Agreed. Assuming someone wants to take care of the Cython side of
>>>> things, I don't think there's any reason to exclude it for the
>>>> dubious reason that it's "not an interpreter".
>>>> (Would you exclude Psyco, if it were still alive?)
>>>>
>>>
>>> Well, sure - but within the scope of a GSoC project, limiting it to "core
>>> Python" seems like a more realistic goal.
>>>
>>> Adding Cython later shouldn't be an issue if someone is willing to do the
>>> work.
>>
>> Jesse, I understand that we are talking about the benchmarks on
>> speed.pypy.org.  The current suite, and correct me if I am wrong, is
>> completely written in pure Python so that any of the 'interpreters'
>> may run it.
>> My point, which I stand by, was that during the initial phase (where
>> benchmarks are defined) the Cython crowd should have a voice.  This
>> should have an enriching effect on the whole benchmarking task, since
>> they have thought about this issue in a way that is largely orthogonal
>> to the methods PyPy developed.  I think it would be a mistake to leave
>> Cython out of the scoping study.
>>
>> Personally I think the GSoC project should just take the PyPy suite and run
>> with that - bikeshedding about which benchmarks to include is going to make
>> it hard to make progress. We can have fun with that discussion once we have
>> the infrastructure and *some* good benchmarks in place (and the PyPy ones
>> are good ones).
>>
>> So I'm still with Jesse on this one. If there is any "discussion phase" as
>> part of the GSoC project, it should be very strictly bounded by time.
>>
>
> What Michael said: my goal is to get speed.pypy.org ported so that it
> can be used by $N interpreters, for $Y sets of performance numbers.
> I'm trying to constrain the problem and the initial deployment so we
> don't spend the next year meandering about. To start, it should be
> sufficient to port the benchmarks from speed.pypy.org (plus any deltas
> from http://hg.python.org/benchmarks/) to Python 3, along with the
> framework that runs the tests.
>
> I don't care if we eventually run Cython, Psyco, Parrot, etc. But the
> focus at the language summit, and of my push to get hardware via the
> PSF to host this on performance/speed.python.org, is squarely on the
> PyPy, IronPython, Jython and CPython interpreters.
>
> Let's just get our basics done first before we go all crazy with adding stuff :)
>
> jesse

Hi.

Spending significant effort to make those benchmarks run on Cython is
definitely out of scope. If Cython can, say, compile Twisted, we can
run it as well (or skip the few benchmarks that aren't compilable for
a while). If Cython can't run all of the major benchmarks (the ones
using large libraries), or if the Cython people keep asking us to add
static type annotations (which we won't), then the answer is that
Cython is not Python enough.
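
To make the skip policy concrete, here is a rough sketch (not the
actual harness; the benchmark names, interpreter commands and output
format are made up) of a driver that skips benchmarks an interpreter
cannot run at all, instead of failing the whole suite:

    import subprocess

    BENCHMARKS = ["bm_twisted_tcp.py", "bm_django.py", "bm_nbody.py"]

    def run_suite(interpreter):
        """Run each benchmark under `interpreter`; record failures as skips."""
        results = {}
        for bench in BENCHMARKS:
            try:
                # each bm_*.py is assumed to print one float (seconds) on stdout
                out = subprocess.check_output([interpreter, bench])
            except (subprocess.CalledProcessError, OSError):
                # e.g. Cython could not compile the underlying library
                results[bench] = None
            else:
                results[bench] = float(out.decode("ascii").strip())
        return results

So run_suite("pypy") and run_suite("some-cython-runner") would produce
comparable result dicts, with None marking whatever didn't run.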

The second part is porting to Python 3. I would like to postpone this
part (I already chatted with DasIch about it) until the libraries are
ready. As of the 8th of April 2011, none of the interesting benchmarks
run on Python 3 or would be easy to port to it. If we skipped all the
interesting ones (each of which requires a large library), the outcome
would be a set of benchmarks that not many people care about, and not
a very interesting one at that.
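
To illustrate the asymmetry: driver code like the toy sketch below is
trivially portable to Python 3, while a benchmark exercising Twisted
or Django is blocked until the library itself is ported. (Not the real
runner; it omits warmup, process isolation and proper statistics.)

    import time

    def bench(func, iterations=50):
        """Return per-iteration wall-clock times for func()."""
        times = []
        for _ in range(iterations):
            t0 = time.time()
            func()
            times.append(time.time() - t0)
        return times

    if __name__ == "__main__":
        # A self-contained workload needing no third-party library --
        # which is exactly why it is also not very interesting.
        timings = bench(lambda: sum(i * i for i in range(100000)))
        print(min(timings))  # best-of-N damps scheduler noise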

My proposal for steering this project would be to get the
infrastructure running first: improve the backend that runs the
benchmarks, make sure we build all the interpreters, and get the whole
pipeline going. Also, polish Codespeed so it looks good for everyone.
This is already *a lot* of work, even though it might not look like
it.
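
For the Codespeed side, submitting a result is just an HTTP POST. A
minimal sketch, assuming the /result/add/ endpoint and the field names
from the Codespeed README; the host and all the values below are made
up:

    from urllib.parse import urlencode
    from urllib.request import urlopen

    CODESPEED_URL = "http://speed.example.org"  # hypothetical deployment

    data = urlencode({
        "commitid": "14a8dc36b163",          # revision of the interpreter
        "branch": "default",
        "project": "CPython",
        "executable": "cpython-64bit",
        "benchmark": "nbody",
        "environment": "psf-benchmark-box",  # machine registered in Codespeed
        "result_value": 0.2456,              # e.g. seconds per iteration
    }).encode("ascii")

    urlopen(CODESPEED_URL + "/result/add/", data).read()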

Cheers,
fijal

