> And this is a fundamental issue with tying benchmarks to real
> applications and libraries; if the code the benchmark relies on never
> changes to Python 3, then the benchmark is dead in the water. As
> Daniel pointed out, if spitfire simply never converts, then either we
> need to convert it ourselves *just* for the benchmark (yuck), live
> w/o the benchmark (ok, but if this happens to a bunch of benchmarks
> then we are not going to have a lot of data), or we look at making new
> benchmarks based on apps/libraries that _have_ made the switch to
> Python 3 (which means trying to agree on some new set of benchmarks
> to add to the current set).
What were the criteria by which the original benchmark set was chosen?
I'm assuming they were picked because they're popular libraries among
developers across a variety of use cases, so speed.pypy would reflect
the speed of everyday tasks?
If so, presumably it shouldn't be too hard to find appropriate libraries
for Python 3?