
Today we are releasing a beta of the upcoming PyPy 1.1 release. There are some Windows and OS X issues left that we would like to address between now and the final release but apart from this things should be working. We would appreciate feedback.

The PyPy development team.

==========================================
PyPy 1.1: Compatibility & Consolidation
==========================================

Welcome to the PyPy 1.1 release - the first release after the end of EU funding. This release focuses on making PyPy's Python interpreter more compatible with CPython (currently CPython 2.5) and on making the interpreter more stable and bug-free.

PyPy's Getting Started lives at:

  http://codespeak.net/pypy/dist/pypy/doc/getting-started.html

Highlights of This Release
==========================

- More of CPython's standard library extension modules are supported, among them ctypes, sqlite3, csv, and many more. Most of these extension modules are fully supported under Windows as well (see the short sqlite3/csv sketch just after this announcement).

  http://codespeak.net/pypy/dist/pypy/doc/cpython_differences.html
  http://morepypy.blogspot.com/2008/06/pypy-improvements.html

- Through a large number of tweaks, performance has been improved by 10%-50% since the 1.0 release. The Python interpreter is now between 0.8-2x (and in some corner case 3-4x) of the speed of CPython. A large part of these speed-ups come from our new generational garbage collectors.

  http://codespeak.net/pypy/dist/pypy/doc/garbage_collection.html

- Our Python interpreter now supports distutils as well as easy_install for pure-Python modules.

- We have tested PyPy with a number of third-party libraries. PyPy can run now: Django, Pylons, BitTorrent, Twisted, SymPy, Pyglet, Nevow, Pinax:

  http://morepypy.blogspot.com/2008/08/pypy-runs-unmodified-django-10-beta.htm...
  http://morepypy.blogspot.com/2008/07/pypys-python-runs-pinax-django.html
  http://morepypy.blogspot.com/2008/06/running-nevow-on-top-of-pypy.html

- A buildbot was set up to run PyPy's various test suites nightly on Windows and Linux machines:

  http://codespeak.net:8099/

- Sandboxing support: It is possible to translate the Python interpreter in a special way so that the result is fully sandboxed.

  http://codespeak.net/pypy/dist/pypy/doc/sandbox.html
  http://blog.sandbox.lt/en/WSGI%20and%20PyPy%20sandbox

Other Changes
=============

- The ``clr`` module was greatly improved. This module is used to interface with .NET libraries when translating the Python interpreter to the CLI.

  http://codespeak.net/pypy/dist/pypy/doc/clr-module.html
  http://morepypy.blogspot.com/2008/01/pypynet-goes-windows-forms.html
  http://morepypy.blogspot.com/2008/01/improve-net-integration.html

- Stackless improvements: PyPy's ``stackless`` module is now more complete. We added channel preferences which change details of the scheduling semantics. In addition, the pickling of tasklets has been improved to work in more cases.

- Classic classes are enabled by default now. In addition, they have been greatly optimized and debugged:

  http://morepypy.blogspot.com/2007/12/faster-implementation-of-classic.html

- PyPy's Python interpreter can be translated to Java bytecode now to produce a pypy-jvm. At the moment there is no integration with Java libraries yet, so this is not really useful.

- We added cross-compilation machinery to our translation toolchain to make it possible to cross-compile our Python interpreter to Nokia's Maemo platform:

  http://codespeak.net/pypy/dist/pypy/doc/maemo.html

- Some effort was spent to make the Python interpreter more memory-efficient. This includes the implementation of a mark-compact GC which uses less memory than other GCs during collection. Additionally, there were various optimizations that make Python objects smaller, e.g. class instances are often only 50% of the size of CPython's.

  http://morepypy.blogspot.com/2008/10/dsseldorf-sprint-report-days-1-3.html

- The support for the trace hook in the Python interpreter was improved to be able to trace the execution of builtin functions and methods. With this, we implemented the ``_lsprof`` module, which is the core of the ``cProfile`` module (a small cProfile sketch follows this announcement).

- A number of rarely used features of PyPy were removed since the previous release because they were unmaintained and/or buggy. Those are: The LLVM and the JS backends, the aspect-oriented programming features, the logic object space, the extension compiler and the first incarnation of the JIT generator. The new JIT generator is in active development, but not included in the release.

  http://codespeak.net/pipermail/pypy-dev/2009q2/005143.html
  http://morepypy.blogspot.com/2009/03/good-news-everyone.html
  http://morepypy.blogspot.com/2009/03/jit-bit-of-look-inside.html

What is PyPy?
=============

Technically, PyPy is both a Python interpreter implementation and an advanced compiler, or more precisely a framework for implementing dynamic languages and generating virtual machines for them.

The framework allows for alternative frontends and for alternative backends, currently C, Java and .NET. For our main target "C", we can "mix in" different garbage collectors and threading models, including micro-threads aka "Stackless". The inherent complexity that arises from this ambitious approach is mostly kept away from the Python interpreter implementation, our main frontend.

Socially, PyPy is a collaborative effort of many individuals working together in a distributed and sprint-driven way since 2003. PyPy would not have gotten as far as it has without the coding, feedback and general support from numerous people.

Have fun,

the PyPy release team, [in alphabetical order]

Amaury Forgeot d'Arc, Anders Hammerquist, Antonio Cuni, Armin Rigo, Carl Friedrich Bolz, Christian Tismer, Holger Krekel, Maciek Fijalkowski, Samuele Pedroni

and many others: http://codespeak.net/pypy/dist/pypy/doc/contributor.html
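To make the stdlib-compatibility bullet above concrete, here is a minimal sketch using sqlite3 and csv, written against Python 2.5 (the version PyPy 1.1 targets). The table name, the data and the in-memory CSV output are invented for illustration and are not taken from PyPy's own tests; the point of the compatibility work is that code like this is expected to run unmodified on a translated pypy-c::

    # A minimal sketch using two of the newly supported extension modules,
    # sqlite3 and csv, written for Python 2.5.  The table and data below
    # are invented for illustration.
    import csv
    import sqlite3
    import StringIO

    # Build a tiny in-memory database.
    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE langs (name TEXT, year INTEGER)')
    conn.executemany('INSERT INTO langs VALUES (?, ?)',
                     [('Python', 1991), ('RPython', 2003)])

    # Dump the table as CSV into an in-memory buffer instead of a file.
    buf = StringIO.StringIO()
    writer = csv.writer(buf)
    for row in conn.execute('SELECT name, year FROM langs ORDER BY year'):
        writer.writerow(row)

    print buf.getvalue()
    conn.close()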
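The trace-hook/``_lsprof`` bullet can be exercised with nothing but the stock ``cProfile`` API; the sketch below uses an arbitrary stand-in function as the profiling target and makes no assumptions about PyPy-specific behaviour::

    # Profiles a toy function with cProfile, whose core is the ``_lsprof``
    # module mentioned in the announcement.  The function itself is just
    # an arbitrary stand-in for "some workload".
    import cProfile

    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    # Runs the statement under the profiler and prints call counts and
    # timings, including time spent inside builtin functions and methods.
    cProfile.run('fib(20)')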

On Sunday, 19 April 2009, Samuele Pedroni wrote:
- Through a large number of tweaks, performance has been improved by 10%-50% since the 1.0 release. The Python interpreter is now between 0.8-2x (and in some corner case 3-4x) of the speed of CPython. A large part of these speed-ups come from our new generational garbage collectors.
I think this formulation is a bit confusing. It is not our speed that is 0.8-2x CPython, it is our performance relative to CPython that is between 0.8 and 2, with 0.8 meaning that we are faster than CPython on those benchmarks, and 2 meaning that we need twice the time to run the benchmark. Jacob

Jacob Hallén wrote:
On Sunday, 19 April 2009, Samuele Pedroni wrote:
- Through a large number of tweaks, performance has been improved by 10%-50% since the 1.0 release. The Python interpreter is now between 0.8-2x (and in some corner case 3-4x) of the speed of CPython. A large part of these speed-ups come from our new generational garbage collectors.
I think this formulation is a bit confusing. It is not our speed that is 0.8-2x CPython, it is our performance relative to CPython that is between 0.8 and 2, with 0.8 meaning that we are faster than CPython on those benchmarks, and 2 meaning that we need twice the time to run the benchmark.
We noticed, and fixed it on the webpage and on the blog. The final release announcement will go out in the corrected form as well. Cheers, Carl Friedrich

On Apr 20, 2009, at 8:36 PM, Jacob Hallén wrote:
I think this formulation is a bit confusing. It is not our speed that is 0.8-2x CPython, it is our performance relative to CPython that is between 0.8 and 2, with 0.8 meaning that we are faster than CPython on those benchmarks, and 2 meaning that we need twice the time to run the benchmark.
Maybe I am a bit confused, but I don't see a difference between those two things? i.e., if the speed is 0.8x CPython, to me that means that it runs in 80% of CPython's time (i.e., faster), whereas 2x CPython would be twice as much time. In any case, I agree that the second formulation is phrased more clearly, just curious if my understanding of 0.8x is flawed. Niko

Niko Matsakis wrote:
On Apr 20, 2009, at 8:36 PM, Jacob Hallén wrote:
I think this formulation is a bit confusing. It is not our speed that is 0.8-2x CPython, it is our performance relative to CPython that is between 0.8 and 2, with 0.8 meaning that we are faster than CPython on those benchmarks, and 2 meaning that we need twice the time to run the benchmark.
Maybe I am a bit confused, but I don't see a difference between those two things?
i.e., if the speed is 0.8x CPython, to me that means that it runs in 80% of CPython's time (i.e., faster), whereas 2x CPython would be twice as much time.
In any case, I agree that the second formulation is phrased more clearly, just curious if my understanding of 0.8x is flawed.
Hi Niko, FWIW, that's the reasoning we had when we wrote the thing. However, we keep having this discussion every time we write something about performance, so I guess it's not as clear as you and I think :-). Carl Friedrich

Carl Friedrich Bolz wrote:
Niko Matsakis wrote:
On Apr 20, 2009, at 8:36 PM, Jacob Hallén wrote:
I think this formulation is a bit confusing. It is not our speed that is 0.8-2x CPython, it is our performance relative to CPython that is between 0.8 and 2, with 0.8 meaning that we are faster than CPython on those benchmarks, and 2 meaning that we need twice the time to run the benchmark.
Maybe I am a bit confused, but I don't see a difference between those two things?
i.e., if the speed is 0.8x CPython, to me that means that it runs in 80% of CPython's time (i.e., faster), whereas 2x CPython would be twice as much time.
In any case, I agree that the second formulation is phrased more clearly, just curious if my understanding of 0.8x is flawed.
Hi Niko,
FWIW, that's the reasoning we had when we wrote the thing. However, we keep having this discussion every time we write something about performance, so I guess it's not as clear as you and I think :-).
Carl Friedrich
I've seen this discussion several times, and not only in PyPy-related contexts. It seems that the majority of native English speakers (or possibly it's just Americans?) interpret the statement in exactly the opposite way from the majority of non-native speakers, with "2x of the speed" being interpreted as two times faster by native speakers and as taking twice the amount of time by non-native speakers. Janzert

As a native English speaker, I should probably just jump in: yes, "2x of the speed" would read to me as an awkward way of saying "2x the speed", hence "twice the speed", which of course means "twice as fast", not "twice as slow". The preposition "of" simply does not introduce some fraction; at the very least I just cannot hear any echo of it in my mind. I recommend reading Steven Pinker's The Stuff of Thought for more on how these quirks reveal how the brain actually works (and not just the quirks of native English speakers either).

And I was confused by the original statement, thinking that PyPy had just started to deliver on its JIT, instead of the other story, which is really about compatibility. And that's quite cool.

- Jim

On Tue, Apr 21, 2009 at 8:49 AM, Janzert <janzert@janzert.com> wrote:
Carl Friedrich Bolz wrote:
Niko Matsakis wrote:
On Apr 20, 2009, at 8:36 PM, Jacob Hallén wrote:
I think this formulation is a bit confusing. It is not our speed that is 0.8-2x CPython, it is our performance relative to CPython that is between 0.8 and 2, with 0.8 meaning that we are faster than CPython on those benchmarks, and 2 meaning that we need twice the time to run the benchmark.
Maybe I am a bit confused, but I don't see a difference between those two things?
i.e., if the speed is 0.8x CPython, to me that means that it runs in 80% of CPython's time (i.e., faster), whereas 2x CPython would be twice as much time.
In any case, I agree that the second formulation is phrased more clearly, just curious if my understanding of 0.8x is flawed.
Hi Niko,
FWIW, that's the reasoning we had when we wrote the thing. However, we keep having this discussion every time we write something about performance, so I guess it's not as clear as you and I think :-).
Carl Friedrich
I've seen this discussion several times, and not only in PyPy-related contexts. It seems that the majority of native English speakers (or possibly it's just Americans?) interpret the statement in exactly the opposite way from the majority of non-native speakers, with "2x of the speed" being interpreted as two times faster by native speakers and as taking twice the amount of time by non-native speakers.
Janzert
-- Jim Baker jbaker@zyasoft.com

On Tue, Apr 21, 2009 at 08:25, Niko Matsakis <niko@alum.mit.edu> wrote:
On Apr 20, 2009, at 8:36 PM, Jacob Hallén wrote:
I think this formulation is a bit confusing. It is not our speed that is 0.8-2x CPython, it is our performance relative to CPython that is between 0.8 and 2, with 0.8 meaning that we are faster than CPython on those benchmarks, and 2 meaning that we need twice the time to run the benchmark.
Maybe I am a bit confused, but I don't see a difference between those two things?
i.e., if the speed is 0.8x CPython, to me that means that it runs in 80% of CPython's time (i.e., faster), whereas 2x CPython would be twice as much time.
In any case, I agree that the second formulation is phrased more clearly, just curious if my understanding of 0.8x is flawed.
The problem is when you talk about _speed_. The term "performance" can be considered ambiguous, but speed is obviously the opposite of running time (or better, inversely proportional to it), and you were talking about the latter. If my _speed_ is twice yours, I complete the same distance (or do the same things) in half the time, of course. Got to take another Physics class ;-D? Or to get some rest after preparing the release? (Just kidding, obviously.) Bye -- Paolo Giarrusso
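To pin down the two readings Jacob and Paolo describe, here is a tiny arithmetic sketch; the benchmark timings below are invented purely for illustration::

    # Invented timings for one benchmark, purely for illustration.
    cpython_seconds = 10.0
    pypy_seconds = 8.0   # PyPy finishes this particular benchmark sooner

    # Reading 1: relative running time, which is what the announcement meant.
    # 0.8 here means PyPy needs only 80% of CPython's time, i.e. it is faster.
    time_ratio = pypy_seconds / cpython_seconds

    # Reading 2: relative speed (work per unit time), the inverse of the above.
    # The same measurement expressed as speed is 1.25x CPython, not 0.8x.
    speed_ratio = cpython_seconds / pypy_seconds

    print "time ratio:  %.2f (PyPy needs %.0f%% of CPython's time)" % (
        time_ratio, time_ratio * 100)
    print "speed ratio: %.2f (PyPy runs at %.2fx the speed of CPython)" % (
        speed_ratio, speed_ratio)

With these numbers, 0.8 on the time scale and 1.25 on the speed scale describe the same run, which is why "0.8-2x of the speed" and "between 0.8 and 2 relative to CPython's running time" read so differently.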

Congrats on the release, guys -- eagerly looking forward to 1.1 final!! PS. Any chance of updating unicodedata to v5.1 for the final? On Sun, Apr 19, 2009 at 10:46 PM, Samuele Pedroni <pedronis@openend.se> wrote:
Today we are releasing a beta of the upcoming PyPy 1.1 release. There are some Windows and OS X issues left that we would like to address between now and the final release but apart from this things should be working. We would appreciate feedback.
The PyPy development team.
========================================== PyPy 1.1: Compatibility & Consolidation ==========================================
Welcome to the PyPy 1.1 release - the first release after the end of EU funding. This release focuses on making PyPy's Python interpreter more compatible with CPython (currently CPython 2.5) and on making the interpreter more stable and bug-free.
PyPy's Getting Started lives at:
http://codespeak.net/pypy/dist/pypy/doc/getting-started.html
Highlights of This Release ==========================
- More of CPython's standard library extension modules are supported, among them ctypes, sqlite3, csv, and many more. Most of these extension modules are fully supported under Windows as well.
http://codespeak.net/pypy/dist/pypy/doc/cpython_differences.html http://morepypy.blogspot.com/2008/06/pypy-improvements.html
- Through a large number of tweaks, performance has been improved by 10%-50% since the 1.0 release. The Python interpreter is now between 0.8-2x (and in some corner case 3-4x) of the speed of CPython. A large part of these speed-ups come from our new generational garbage collectors.
http://codespeak.net/pypy/dist/pypy/doc/garbage_collection.html
- Our Python interpreter now supports distutils as well as easy_install for pure-Python modules.
- We have tested PyPy with a number of third-party libraries. PyPy can run now: Django, Pylons, BitTorrent, Twisted, SymPy, Pyglet, Nevow, Pinax:
http://morepypy.blogspot.com/2008/08/pypy-runs-unmodified-django-10-beta.htm... http://morepypy.blogspot.com/2008/07/pypys-python-runs-pinax-django.html http://morepypy.blogspot.com/2008/06/running-nevow-on-top-of-pypy.html
- A buildbot was set up to run PyPy's various test suites nightly on Windows and Linux machines: http://codespeak.net:8099/
- Sandboxing support: It is possible to translate the Python interpreter in a special way so that the result is fully sandboxed.
http://codespeak.net/pypy/dist/pypy/doc/sandbox.html http://blog.sandbox.lt/en/WSGI%20and%20PyPy%20sandbox
Other Changes =============
- The ``clr`` module was greatly improved. This module is used to interface with .NET libraries when translating the Python interpreter to the CLI.
http://codespeak.net/pypy/dist/pypy/doc/clr-module.html http://morepypy.blogspot.com/2008/01/pypynet-goes-windows-forms.html http://morepypy.blogspot.com/2008/01/improve-net-integration.html
- Stackless improvements: PyPy's ``stackless`` module is now more complete. We added channel preferences which change details of the scheduling semantics. In addition, the pickling of tasklets has been improved to work in more cases.
- Classic classes are enabled by default now. In addition, they have been greatly optimized and debugged:
http://morepypy.blogspot.com/2007/12/faster-implementation-of-classic.html
- PyPy's Python interpreter can be translated to Java bytecode now to produce a pypy-jvm. At the moment there is no integration with Java libraries yet, so this is not really useful.
- We added cross-compilation machinery to our translation toolchain to make it possible to cross-compile our Python interpreter to Nokia's Maemo platform:
http://codespeak.net/pypy/dist/pypy/doc/maemo.html
- Some effort was spent to make the Python interpreter more memory-efficient. This includes the implementation of a mark-compact GC which uses less memory than other GCs during collection. Additionally, there were various optimizations that make Python objects smaller, e.g. class instances are often only 50% of the size of CPython's.
http://morepypy.blogspot.com/2008/10/dsseldorf-sprint-report-days-1-3.html
- The support for the trace hook in the Python interpreter was improved to be able to trace the execution of builtin functions and methods. With this, we implemented the ``_lsprof`` module, which is the core of the ``cProfile`` module.
- A number of rarely used features of PyPy were removed since the previous release because they were unmaintained and/or buggy. Those are: The LLVM and the JS backends, the aspect-oriented programming features, the logic object space, the extension compiler and the first incarnation of the JIT generator. The new JIT generator is in active development, but not included in the release.
http://codespeak.net/pipermail/pypy-dev/2009q2/005143.html http://morepypy.blogspot.com/2009/03/good-news-everyone.html http://morepypy.blogspot.com/2009/03/jit-bit-of-look-inside.html
What is PyPy? =============
Technically, PyPy is both a Python interpreter implementation and an advanced compiler, or more precisely a framework for implementing dynamic languages and generating virtual machines for them.
The framework allows for alternative frontends and for alternative backends, currently C, Java and .NET. For our main target "C", we can "mix in" different garbage collectors and threading models, including micro-threads aka "Stackless". The inherent complexity that arises from this ambitious approach is mostly kept away from the Python interpreter implementation, our main frontend.
Socially, PyPy is a collaborative effort of many individuals working together in a distributed and sprint-driven way since 2003. PyPy would not have gotten as far as it has without the coding, feedback and general support from numerous people.
Have fun,
the PyPy release team, [in alphabetical order]
Amaury Forgeot d'Arc, Anders Hammerquist, Antonio Cuni, Armin Rigo, Carl Friedrich Bolz, Christian Tismer, Holger Krekel, Maciek Fijalkowski, Samuele Pedroni
and many others: http://codespeak.net/pypy/dist/pypy/doc/contributor.html
-- love, tav plex:espians/tav | tav@espians.com | +44 (0) 7809 569 369 http://tav.espians.com | http://twitter.com/tav | skype:tavespian

On Tue, Apr 21, 2009 at 8:43 PM, tav <tav@espians.com> wrote:
Congrats on the release guys -- eagerly looking forward to 1.1. final!!
PS. Any chance of updating unicodedata to v5.1 for the final?
I think no, since we're following the 2.5 language spec, which uses an older unicode db. PS. First reasonable comment. Cheers, fijal

Hi, On Wed, Apr 22, 2009 at 04:47, Maciej Fijalkowski <fijall@gmail.com> wrote:
On Tue, Apr 21, 2009 at 8:43 PM, tav <tav@espians.com> wrote:
PS. Any chance of updating unicodedata to v5.1 for the final?
I think no, since we're following 2.5 language spec, which uses older unicode db.
This could become a translation option, --objspace.std.unicodedb=4.1.0 -- Amaury Forgeot d'Arc
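While the option is being discussed, a quick way to check which Unicode character database a given interpreter actually ships is the stock ``unicodedata`` module; nothing PyPy-specific is assumed here::

    # Prints the version of the Unicode character database bundled with
    # the running interpreter, e.g. "4.1.0" for the database CPython 2.5
    # (and hence PyPy 1.1) follows.
    import unicodedata
    print unicodedata.unidata_version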
participants (10)
- Amaury Forgeot d'Arc
- Carl Friedrich Bolz
- Jacob Hallén
- Janzert
- Jim Baker
- Maciej Fijalkowski
- Niko Matsakis
- Paolo Giarrusso
- Samuele Pedroni
- tav