
Congratulations everyone on the release! It looks really good! So what's the next priority? Speed or more customisability (or both!)? Cheers, Ben

Hi Ben, hi all, On Tue, Aug 30, 2005 at 10:31 +0100, Ben.Young@risk.sungard.com wrote:
Congratulations everyone on the release! It looks really good!
thanks, also for your constant support!
So what's the next priority? Speed or more customisability (or both!)?
we had a brief discussion at the end of the sprint and, apart from working on the bytecode compiler (which makes the interactive speed appear so slow), we intend to clean up translation driving and various other areas before heading off to the next phases of the project. Also, we currently plan the next sprint in Paris (10th-17th October), which we should announce soon. It's quite likely we will be discussing/starting the next efforts there regarding JIT compilation, massive multithreading and what not.

There is also the ongoing effort of integrating Carl Friedrich's GC code into the actual translated PyPy, improving flexibility around threading, completing some crucial external functions (like os.listdir) and whatnot.

Personally, i hope i will find some time to seriously improve the testing framework on various levels. With PyPy, we begin to have lots of options and variants in testing our own code base, the standard python library's tests as well as testing translation targets and variants. I'd like to implement an approach that allows completely peer-driven testing and sending of reports to a central site where they can be queried according to os/processor/python. I intend to implement this in a PyPy-neutral manner so that the numerous other users of py.test can reuse our efforts for their projects. Additionally, i'd like to have tests become interactively distributable to multiple machines (listed via ssh-account login information) from a single (possibly modified) working copy.

Also, for the EU side of things some of us will need to invest time into reporting and writing papers. We intend to keep as much of that work reusable on the website, as we have no inclination to just produce dead paper.

Last but not least, we are still looking for sprint places for the end of this year and the whole of next year. There appear to be possibilities in Istanbul (Turkey), Bern (Switzerland) and Romania, but none of these are concrete at this point. It would also already be good to know if there is interest in doing a PyPy sprint at PyCon US next year.

cheers, holger
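As a minimal sketch of what such a peer-contributed report could contain before being posted to a central collector (the field names and structure below are purely illustrative, not an existing py.test interface):

import platform
import socket
import sys
import time

# hypothetical report record a peer would upload after a test run;
# every field name here is illustrative, not an actual py.test API
report = {
    "host": socket.gethostname(),
    "os": platform.system(),           # e.g. "Linux", "Darwin"
    "processor": platform.machine(),   # e.g. "i686", "ppc"
    "python": sys.version.split()[0],  # e.g. "2.4.1"
    "timestamp": time.time(),
    "target": "pypy own tests",        # or "stdlib tests", "translated pypy", ...
    "results": {"passed": 0, "failed": 0, "skipped": 0},
}

A central site collecting such records could then be queried along exactly the os/processor/python axes mentioned above.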

Thanks for the reply, Holger.

hpk@trillke.net (holger krekel) wrote on 30/08/2005 10:56:01:
Hi Ben, hi all,
On Tue, Aug 30, 2005 at 10:31 +0100, Ben.Young@risk.sungard.com wrote:
Congratulations everyone on the release! It looks really good!
thanks, also for your constant support!
So what's the next priority? Speed or more customisability (or both!)?
we had a brief discussion at the end of the sprint and, apart from working on the bytecode compiler (which makes the interactive speed appear so slow), we intend to clean up translation driving and various other areas before heading off to the next phases of the project. Also, we currently plan the next sprint in Paris (10th-17th October), which we should announce soon. It's quite likely we will be discussing/starting the next efforts there regarding JIT compilation, massive multithreading and what not.
There is also the ongoing effort of integrating Carl Friedrich's GC code into the actual translated PyPy, improving flexibility around threading, completing some crucial external functions (like os.listdir) and whatnot.
Will there eventually be a way for existing C extension modules to talk to the generated PyPy? Or will people have to reimplement their extensions (perhaps using a ctypes-style notation)? I guess the hard bit is making it cross-backend compatible (for instance the way IronPython/Jython can both automatically see the platform objects).
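As a minimal sketch of the "ctypes-style notation" idea (the binding is declared from Python instead of compiled C glue); the library lookup and the choice of function below are only illustrative:

import ctypes
import ctypes.util

libc_path = ctypes.util.find_library("c")   # may be None on some platforms
libc = ctypes.CDLL(libc_path)

# declare the signature of an existing C function, then call it directly
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int
assert libc.abs(-42) == 42

The attraction for a non-CPython implementation is that nothing above depends on CPython's object layout, so in principle the same declarations could be targeted by different backends.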
Personally, i hope i will find some time to seriously improve the testing framework on various levels. With PyPy, we begin to have lots of options and variants in testing our own code base, the standard python library's tests as well as testing translation targets and variants. I'd like to implement an approach that allows completely peer-driven testing and sending of reports to a central site where they can be queried according to os/processor/python. I intend to implement this in a PyPy neutral manner so that the numerous other users of py.test can reuse our efforts for their projects. Additionally, i'd like to have tests become interactively distributable to multiple machines (listed via ssh-account login information) from a single (possibly modified) working copy.
Have you come up with any solutions to make the annotation/translation process a bit less fragile? At the moment a small fix somewhere in the code can accidentally produce huge amounts of confusion in the annotator. Perhaps some "checkpoints" could be placed in the code, where if an object doesn't have a particular annotation we stop at that point?
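As a hypothetical sketch of that checkpoint idea (none of this is existing PyPy machinery): in plain Python it degenerates to an assert, the point being that the toolchain could verify the inferred annotation right at the call and stop there with a useful error instead of letting the confusion propagate:

def checkpoint_int(x):
    # in a translated/annotated setting this would make the annotator
    # verify that x is inferred as an integer *here*, and abort with a
    # clear error at this point rather than far away in the flow graph
    assert isinstance(x, int), "expected an integer here, got %r" % (x,)
    return x

def scaled(value):
    return checkpoint_int(value) * 4

print(scaled(10))   # -> 40
# scaled("ten")     # would fail at the checkpoint, not somewhere downstream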
Also, for the EU side of things some of us will need to invest time into reporting and writing papers. We intend to keep as much of that work reusable on the website as we have no inclination to just produce dead paper.
Last but not least, we are still looking for sprint places for the end of this year and the whole of next year. There appear to be possibilities in Istanbul (Turkey), Bern (Switzerland) and Romania, but none of these are concrete at this point. It would also already be good to know if there is interest in doing a PyPy sprint at PyCon US next year.
Thanks for your patience with my incessant questioning! Cheers, Ben
cheers,
holger

holger krekel:
Personally, i hope i will find some time to seriously improve the testing framework on various levels. With PyPy, we begin to have lots of options and variants in testing our own code base, the standard python library's tests as well as testing translation targets and variants.
Being a fan of testing I'd like to suggest conducting some comparative tests between CPython and PyPy, as well. At least I find stuff like the following pretty "interesting". It's about using re for splitting strings at very large substrings of some minimum length (something I just used for processing AIFF audio files, the code here is slightly simpler):

Python 2.4 (#1, Feb 7 2005, 21:41:21)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1640)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> n = 'o'
>>> l = int(1e5)
>>> inp = "012" + n*l + "abc" + n*l + "xyz"
>>> exp = ["012", "abc", "xyz"]
>>> res = re.split(n+'{%d,%d}'%(l, l), inp)
>>> exp == res
False

vs. PyPy 0.7.0 in StdObjSpace on top of Python 2.4 (startupttime: 7.99 secs)

>>>> import re
>>>> n = 'o'
>>>> l = int(1e5)
>>>> inp = "012" + n*l + "abc" + n*l + "xyz"
>>>> exp = ["012", "abc", "xyz"]
>>>> res = re.split(n+'{%d,%d}'%(l, l), inp)
>>>> exp == res
True

There could be workarounds for this particular case, but the point is that PyPy can be "correct" in places where CPython is not (here probably because of limitations of the re machinery). And because they'd fail you would not expect to find such test cases in the "normal" test suites... In a way it's like saying "Look ma, I might be still bloody slow, but eventually I'm getting to the right place!"

Dinu
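One possible workaround of the kind hinted at above, sketched without the regexp engine: scan the string directly for runs of the separator character of at least the required length (the function name and interface are made up for illustration):

def split_on_long_runs(s, ch, minlen):
    # split s at every maximal run of the character ch that is
    # at least minlen long, without going through the re machinery
    parts = []
    start = i = 0
    n = len(s)
    while i < n:
        if s[i] == ch:
            j = i
            while j < n and s[j] == ch:
                j += 1
            if j - i >= minlen:
                parts.append(s[start:i])
                start = j
            i = j
        else:
            i += 1
    parts.append(s[start:])
    return parts

l = int(1e5)
inp = "012" + "o"*l + "abc" + "o"*l + "xyz"
print(split_on_long_runs(inp, "o", l) == ["012", "abc", "xyz"])   # -> True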

On Tue, 30 Aug 2005 14:45:25 +0200 Dinu Gherman <gherman@darwin.in-berlin.de> wrote:
Being a fan of testing I'd like to suggest conducting some comparative tests between CPython and PyPy, as well. At least I find stuff like the following pretty "interesting". It's about using re for splitting strings at very large substrings of some minimum length (something I just used for processing AIFF audio files, the code here is slightly simpler):
Python 2.4 (#1, Feb 7 2005, 21:41:21)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1640)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> n = 'o'
>>> l = int(1e5)
>>> inp = "012" + n*l + "abc" + n*l + "xyz"
>>> exp = ["012", "abc", "xyz"]
>>> res = re.split(n+'{%d,%d}'%(l, l), inp)
>>> exp == res
False
Dinu, that scared me deeply! So I stopped everything and tried it.

Python 2.3.5 (#1, Aug 11 2005, 10:10:19)
[GCC 3.3.5 (Debian 1:3.3.5-8ubuntu2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> n = 'o'
>>> l = int(1e5)
>>> inp = "012" + n*l + "abc" + n*l + "xyz"
>>> exp = ["012", "abc", "xyz"]
>>> res = re.split(n+'{%d,%d}'%(l, l), inp)
>>> exp == res
False
Python 2.4.1 (#2, Mar 30 2005, 21:51:10)
[GCC 3.3.5 (Debian 1:3.3.5-8ubuntu2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> n = 'o'
>>> l = int(1e5)
>>> inp = "012" + n*l + "abc" + n*l + "xyz"
>>> exp = ["012", "abc", "xyz"]
>>> res = re.split(n+'{%d,%d}'%(l, l), inp)
>>> exp == res
True
So, this seems to be a bug fixed in CPython after 2.4
There could be workarounds for this particular case, but the point is that PyPy can be "correct" in places where CPython is not (here probably because of limitations of the re machinery). And because they'd fail you would not expect to find such test cases in the "normal" test suites...
I'm a *big fan* of the pypy-team and pypy-itself. But I do not think __this particular case__ is fair enough to advertise PyPy getting it right where CPython got it wrong. best regards, Rod Senra rsenra _at_ acm.org

Rodrigo Dias Arruda Senra:
So, this seems to be a bug fixed in CPython after 2.4
Phew - good news! ;-)
I'm a *big fan* of the pypy-team and pypy-itself. But I do not think __this particular case__ is fair enough to advertise PyPy getting it right where CPython got it wrong.
Quite true! In fact, both should implement the same language! And from that point of view it sounds strange to suggest comparative tests, except maybe for finding implementation buglets like this one, maybe especially in the standard library... Regards, Dinu

Dinu Gherman <gherman@darwin.in-berlin.de> writes:
Rodrigo Dias Arruda Senra:
So, this seems to be a bug fixed in CPython after 2.4
Phew - good news! ;-)
I'm a *big fan* of the pypy-team and pypy-itself. But I do not think __this particular case__ is fair enough to advertise PyPy getting it right where CPython got it wrong.
Quite true! In fact, both should implement the same language! And from that point of view it sounds strange to suggest comparative tests, except maybe for finding implementation buglets like this one, maybe especially in the standard library...
Implementing PyPy has found more than one strange bug in the CPython implementation, to be sure... Cheers, mwh
-- Well, as an American citizen I hope that the EU tells the MPAA and RIAA to shove it where the Sun don't shine. Actually they already did. Only first they bent over and dropped their trousers. -- Shmuel (Seymour J.) Metz & Toni Lassila, asr

On 8/30/05, holger krekel <hpk@trillke.net> wrote:
Personally, i hope i will find some time to seriously improve the testing framework on various levels. With PyPy, we begin to have lots of options and variants in testing our own code base, the standard python library's tests as well as testing translation targets and variants. I'd like to implement an approach that allows completely peer-driven testing and sending of reports to a central site where they can be queried according to os/processor/python. I intend to implement this in a PyPy neutral manner so that the numerous other users of py.test can reuse our efforts for their projects. Additionally, i'd like to have tests become interactively distributable to multiple machines (listed via ssh-account login information) from a single (possibly modified) working copy.
This reminds me of BuildBot: http://buildbot.sourceforge.net/ . Does it look relevant? Seo Sanghyeon

Hey Seo! On Wed, Aug 31, 2005 at 10:49 +0900, Sanghyeon Seo wrote:
On 8/30/05, holger krekel <hpk@trillke.net> wrote:
Personally, i hope i will find some time to seriously improve the testing framework on various levels. With PyPy, we begin to have lots of options and variants in testing our own code base, the standard python library's tests as well as testing translation targets and variants. I'd like to implement an approach that allows completely peer-driven testing and sending of reports to a central site where they can be queried according to os/processor/python. I intend to implement this in a PyPy neutral manner so that the numerous other users of py.test can reuse our efforts for their projects. Additionally, i'd like to have tests become interactively distributable to multiple machines (listed via ssh-account login information) from a single (possibly modified) working copy.
This reminds me of BuildBot: http://buildbot.sourceforge.net/ . Does it look relevant?
I know of buildbot but i think it has a different focus. It works with a central installation and it targets more general build processes whereas we would probably focus on detailed python testing and have it peer-driven so that everyone can contribute to gather information (which does obviously not exclude having servers which do it on a regular basis via cron or are triggered by svn-notification emails). cheers, holger

On Wed, Aug 31, 2005 at 08:14:42AM +0200, holger krekel wrote:
On Wed, Aug 31, 2005 at 10:49 +0900, Sanghyeon Seo wrote:
On 8/30/05, holger krekel <hpk@trillke.net> wrote:
Personally, i hope i will find some time to seriously improve the testing framework on various levels. With PyPy, we begin to have lots of options and variants in testing our own code base, the standard python library's tests as well as testing translation targets and variants. I'd like to implement an approach that allows completely peer-driven testing and sending of reports to a central site where they can be queried according to os/processor/python. I intend to implement this in a PyPy neutral manner so that the numerous other users of py.test can reuse our efforts for their projects.
Additionally, i'd like to have tests become interactively distributable to multiple machines (listed via ssh-account login information) from a single (possibly modified) working copy.
Yay! I've been hacking on something like this recently.
This reminds me of BuildBot: http://buildbot.sourceforge.net/ . Does it look relevant?
I know of buildbot but i think it has a different focus. It works with a central installation and it targets more general build processes whereas we would probably focus on detailed python testing and have it peer-driven so that everyone can contribute to gather information (which does obviously not exclude having servers which do it on a regular basis via cron or are triggered by svn-notification emails).
Buildbot is a nice tool if you want to run a test suite automatically (perhaps on several machines with different hardware/software configurations) after every svn checkin. There's one master server that collects and displays results, watches for checkins and tells slaves to start the build when something changes. Anyone can run a buildbot slave on their own machine, if you give them a username and password for connecting to the master. Or do you want something that is more ad-hoc (e.g. a developer downloads pypy, runs the test suite, and sends the test log by email)? Marius Gedminas -- I code in vi because I don't want to learn another OS. :) -- Robert Love
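For the more ad-hoc, ssh-driven variant holger described earlier (pushing the current, possibly modified working copy to a few machines and running the tests there), a rough by-hand sketch could look as follows; the host names, paths and test invocation are placeholders rather than py.test's actual distribution mechanism:

import subprocess

hosts = ["user@box1.example.org", "user@box2.example.org"]   # placeholder ssh accounts
for host in hosts:
    # push the local working copy (including uncommitted changes) to the remote machine
    subprocess.call(["rsync", "-az", "--delete", "./", host + ":pypy-testrun/"])
    # run the tests there; stdout/stderr come back over the ssh channel
    subprocess.call(["ssh", host, "cd pypy-testrun && python py/bin/py.test pypy"])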
participants (7):
- Ben.Young@risk.sungard.com
- Dinu Gherman
- hpk@trillke.net
- Marius Gedminas
- Michael Hudson
- Rodrigo Dias Arruda Senra
- Sanghyeon Seo