On Tue, Jan 31, 2012 at 7:40 PM, Brett Cannon <brett@python.org> wrote:
>
> On Tue, Jan 31, 2012 at 11:58, Paul Graydon <paul@paulgraydon.co.uk> wrote:
>>
>>> And this is a fundamental issue with tying benchmarks to real
>>> applications and libraries; if the code a benchmark relies on is never
>>> ported to Python 3, then the benchmark is dead in the water. As Daniel
>>> pointed out, if spitfire simply never converts, then either we convert it
>>> ourselves *just* for the benchmark (yuck), live w/o the benchmark (OK,
>>> but if this happens to a bunch of benchmarks then we won't have much
>>> data), or we make new benchmarks based on apps/libraries that _have_
>>> made the switch to Python 3 (which means trying to agree on some new set
>>> of benchmarks to add to the current set).
>>>
>>>
>> What were the criteria by which the original benchmark set was chosen?
>> I'm assuming it was that they're generally popular libraries amongst
>> developers across a variety of purposes, so that speed.pypy would show
>> the speed of everyday tasks?
>
> That's the reason Unladen Swallow chose them, yes. PyPy then adopted them
> and added the Twisted benchmarks.
>
>>
>> If so, presumably it shouldn't be too hard to find appropriate libraries
>> for Python 3?
>
> Perhaps, but someone has to put in the effort to find those benchmarks,
> code them up, show that they represent a reasonable workload, and then get
> them accepted. Everyone likes the current set because the unladen team put
> a lot of time and effort into selecting and creating those benchmarks.
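
For concreteness, "coding one up" mostly means writing a small driver that
exercises a library's hot path in a loop and reports timings for the runner
to aggregate. A rough sketch, with a made-up json round-trip workload
standing in for a real Python 3 library (the actual suite's runner also
handles warmup, multiple runs, and statistics):

    import json
    import time

    def bench_json_roundtrip(iterations):
        # Hypothetical workload: round-trip a nested structure through the
        # stdlib json module, which already runs on Python 3.
        data = {"key%d" % i: list(range(50)) for i in range(100)}
        timings = []
        for _ in range(iterations):
            t0 = time.time()
            json.loads(json.dumps(data))
            timings.append(time.time() - t0)
        return timings

    if __name__ == "__main__":
        times = bench_json_roundtrip(100)
        print("min %.6fs, mean %.6fs" % (min(times), sum(times) / len(times)))

The hard part isn't the driver; it's making the case that the workload is
representative, which is what made the unladen set credible in the first
place.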