Release date for PyPy 2.0 beta 2?
Hi,

I'm trying to make some performance comparisons using various tools like CPython, Cython, PyPy and Numba, as described in an exercise I've put up for a presentation (a tiny function generating digits of Pi): https://gist.github.com/deeplook/4947835 For this code, PyPy 1.9 shows around 50% of the performance of CPython.

Christian Tismer tells me 2.0 beta 1 was much better, but I'm running into a bug with PyPy 2.0 beta 1 that was already described here two months ago: https://bugs.pypy.org/issue1350

So... is there any estimate for the release date of 2.0 beta 2?

Thanks,
Dinu
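For reference, the kind of workload the gist exercises (this is a sketch, not the gist's actual code) can be illustrated with Gibbons' unbounded spigot algorithm, which generates Pi's digits one at a time using only arbitrary-precision integer arithmetic:

```python
from itertools import islice

def pi_digits():
    """Gibbons' unbounded spigot: yield decimal digits of Pi forever.

    Every step is pure long (bignum) arithmetic, which is why such a
    benchmark measures the integer implementation rather than the JIT.
    """
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # the next digit is now certain
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            # consume another series term to tighten the digit bounds
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

The integers q, r, t grow without bound as digits are produced, so for deep digit counts the run time is dominated by bignum multiplication and division.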
On Sat, Feb 16, 2013 at 11:48 PM, Dinu Gherman <gherman@darwin.in-berlin.de> wrote:
So... is there any estimate for the release date of 2.0 beta 2?
Hi Dinu,

Ideally, just run a nightly build: http://buildbot.pypy.org/nightly/trunk/ Also, don't benchmark on OS X; it's a system that has lots of strange problems.

For what it's worth, you picked a pretty terrible benchmark: this program exercises the long implementation, not "how fast you run Python programs". A new PyPy is ~30% slower than CPython here, which we find reasonable (because the time is spent in runtime code); if you want it faster, pick gmpy. GMP, however, has no means of recovering from a MemoryError.

How do you want to benchmark Python compilers on this? Can anyone do something more sensible than just call operations on longs?

Cheers,
fijal
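The point about the benchmark measuring the long implementation can be seen with a micro-benchmark (illustrative only; the operand size is arbitrary):

```python
import timeit

# Like the Pi exercise, this times the interpreter's arbitrary-precision
# integer code, not JIT-compiled Python-level loops.
total = timeit.timeit("x * x", setup="x = 3 ** 10000", number=10_000)
print(f"10,000 squarings of a ~15,850-bit int: {total:.3f}s total")
```

Comparing this number across CPython, PyPy and gmpy2's `mpz` type isolates the bignum arithmetic cost that dominates the gist's run time.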
Hi Fijal,

On Sat, Feb 16, 2013 at 11:39 PM, Maciej Fijalkowski <fijall@gmail.com> wrote:
How do you want to benchmark python compilers on this? Can anyone do something more sensible than just call operations on longs?
In theory it would be possible to queue up common sequences of operations, a bit like you did with numpy's lazy evaluation; e.g. an expression like "z = x * 3 + y" could be processed in only one walk through the digits. Just saying: this is very unlikely to give any performance gain unless the numbers are very large.

A bientôt,
Armin.
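The deferred-evaluation idea can be sketched roughly as follows (a toy illustration with made-up names, not PyPy internals): arithmetic on wrapped longs records an expression tree instead of computing each intermediate result, and a single `force()` then evaluates the whole expression, which is where a real implementation could fuse the digit walks.

```python
class Lazy:
    """Toy deferred bignum: operators build a tree; force() evaluates it."""

    def __init__(self, value=None, op=None, args=()):
        self.value, self.op, self.args = value, op, args

    def __mul__(self, other):
        return Lazy(op="*", args=(self, other))

    def __add__(self, other):
        return Lazy(op="+", args=(self, other))

    def force(self):
        if self.op is None:
            return self.value
        a, b = (x.force() if isinstance(x, Lazy) else x for x in self.args)
        return a * b if self.op == "*" else a + b

x, y = Lazy(12345), Lazy(67890)
z = x * 3 + y       # no arithmetic happens yet; a tree is recorded
print(z.force())    # 12345 * 3 + 67890 = 104925
```

This toy version just replays the operations, so it gains nothing by itself; the (hypothetical) win would come from evaluating the whole tree in one pass over the digit limbs, which matters only when the numbers are very large.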
Maciej Fijalkowski:
Thanks.
For what is worth, you picked up a very terrible program - this program exercises long implementation, not "how fast you run python programs". A new pypy is ~30% slower than cpython, which we find reasonable (because it's runtime time), if you want faster pick gmpy. GMP however has no means of recovering from a MemoryError.
It was an example from a given context. Clearly, it doesn't show many different features to optimize.
How do you want to benchmark python compilers on this? Can anyone do something more sensible than just call operations on longs?
I also compared it with a version using serialized tuple assignments, which shows an improvement of around 2.5% on CPython but no real change on PyPy, which is kind of what I hoped for.

Regards,
Dinu
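The two assignment styles being compared look roughly like this (a hypothetical update step with made-up names, not the gist's actual code):

```python
def step_tuple(q, r, d):
    # one tuple assignment: packs and unpacks a tuple each iteration
    q, r = q * 10, (r - d) * 10
    return q, r

def step_serialized(q, r, d):
    # "serialized" assignments: plain statements with explicit temporaries
    new_q = q * 10
    new_r = (r - d) * 10
    return new_q, new_r

print(step_tuple(3, 14, 1), step_serialized(3, 14, 1))  # identical results
```

On CPython the tuple form pays a small constant cost per iteration for building and unpacking the tuple; a tracing JIT like PyPy's optimizes the intermediate tuple away, which is consistent with the ~2.5% CPython difference and no PyPy difference reported above.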
participants (3)
- Armin Rigo
- Dinu Gherman
- Maciej Fijalkowski