Hi all,

Some people learn about PyPy, and the first program they try to measure speed with is something like this:

    def factorial(n):
        res = 1
        for i in range(1, n + 1):
            res *= i
        return res

    print factorial(25000)

It may not be completely obvious a priori, but this is about as bogus as benchmarks get. It is by now only 50% slower in PyPy than in CPython, thanks to efforts from various people. The issue is of course that it's an algorithm which, in CPython or in PyPy, spends most of its time in C code computing with rather large "long" objects. (No, PyPy doesn't contain magic to speed up C code 10 times.) In fact, this program spends more than two thirds of its time in the final repr() of the result: converting a long to base 10 is a quadratic operation.

Does it still make sense to add programs like this to our benchmarks? So far, our benchmarks are "real-life" examples. Benchmarks like the one above completely miss the point of PyPy, as they don't stress the Python interpreter part at all. There are also other cases where PyPy's performance is very bad, like cpyext on an extension module that makes lots of small C API calls. I believe it would still make sense to list such cases in the official benchmarks, and have the descriptions of the benchmarks explain what's wrong with them.

A bientôt,

Armin.
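
P.S. If you want to check the claim about repr() yourself, here is a minimal timing sketch (mine, not one of the official measurements; the exact split will of course vary by machine and interpreter version) that separates the multiplication loop from the base-10 conversion:

    import time

    def factorial(n):
        res = 1
        for i in range(1, n + 1):
            res *= i
        return res

    t0 = time.time()
    x = factorial(25000)            # the multiplication loop
    t1 = time.time()
    s = repr(x)                     # base-10 conversion: quadratic in the number of digits
    t2 = time.time()

    print("computing the factorial: %.3fs" % (t1 - t0))
    print("repr() of the result:    %.3fs" % (t2 - t1))

In the original one-liner the conversion is hidden inside the print statement, which stringifies the long before writing it out, so the two costs are easy to conflate.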