On Tue, Dec 1, 2015 at 9:04 PM, Stewart, David C wrote:

On 12/1/15, 10:56 AM, "Maciej Fijalkowski" wrote:

Hi David.
Any reason you run such a tiny subset of the benchmarks?
We could always run more. The full set at https://hg.python.org/benchmarks/ has so many benchmarks, with such divergent results, that it gets hard to see the forest for the trees. I'm more interested in gradually adding to the set than in a huge daily email blast of all of them. Would you disagree?
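(For concreteness: the suite's perf.py driver lets you name a subset with -b, so a daily run is roughly the sketch below. The benchmark names and interpreter paths are illustrative, not the actual set we run.)

    # Illustrative subset run: compare a baseline and a changed interpreter
    # on two named benchmarks; names and paths are placeholders.
    python perf.py -b nbody,django_v3 /usr/bin/python2.7 ~/cpython/python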
Part of the reason I monitor ssbench so closely on Python 2 is that Swift is a major element of cloud computing (and of OpenStack in particular) and spends ~70% of its cycles in Python.
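(For reference, the runs are driven by ssbench's scenario runner, roughly as sketched below. The scenario file and auth flags here are placeholders following Swift's usual -A/-U/-K conventions; I'm sketching from memory, so check ssbench-master --help for the exact options.)

    # Illustrative ssbench run against a local test Swift cluster;
    # the scenario file and credentials are placeholders.
    ssbench-master run-scenario -f very_small.scenario \
        -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing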
Last time I checked, Swift was quite a bit faster under pypy :-)
We are really interested in workloads that are representative of the way Python is used by a lot of people, that produce repeatable results, and that are open source. Do you have any suggestions?
You know our benchmark suite (https://bitbucket.org/pypy/benchmarks); we're gradually incorporating what people report. That typically means open source library benchmarks, when the authors get around to writing some. For example, I have a Django ORM benchmark coming; I can show you if you want. I don't think there is a "representative benchmark", or maybe even a "representative set", partly because open source code tends to be higher quality and less spaghetti-like than the closed source code I've seen, but we keep adding more.

Cheers,
fijal
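P.S. To give a flavour of the kind of thing I mean, here is a minimal standalone sketch (hypothetical; not the actual benchmark I mentioned). It configures Django against an in-memory SQLite database, defines one model, and times bulk inserts plus a filtered query:

    # Hypothetical Django ORM micro-benchmark sketch; not the real one.
    import time

    import django
    from django.conf import settings

    # Standalone Django setup against an in-memory SQLite database.
    settings.configure(
        DATABASES={"default": {"ENGINE": "django.db.backends.sqlite3",
                               "NAME": ":memory:"}},
        INSTALLED_APPS=[],
    )
    django.setup()

    from django.db import connection, models

    class Item(models.Model):
        name = models.CharField(max_length=100)
        value = models.IntegerField()

        class Meta:
            app_label = "bench"  # explicit label; no installed app needed

    # No migrations in a throwaway script, so create the table directly.
    with connection.schema_editor() as editor:
        editor.create_model(Item)

    def bench(n=1000):
        start = time.time()
        Item.objects.bulk_create(
            [Item(name="item%d" % i, value=i) for i in range(n)])
        list(Item.objects.filter(value__gt=n // 2).order_by("value"))
        return time.time() - start

    print("django_orm: %.4f s" % bench())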