Pypy custom interpreter JIT question
Hello PyPy team,

So I became interested in the translation toolchain plus JIT compiler generator, and in an attempt to learn more about it, I set out to write a very simple interpreter for the language BF (brainf***). Simple enough: 8 opcodes, each with no arguments, and most with a one-line implementation. There are also plenty of examples out there to run, like a mandelbrot generator =)

So I wrote up an interpreter for it in RPython. This worked great until I tried to enable the JIT option, at which point it produced incorrect results. Strange; I thought I might have been using the hints incorrectly, and I couldn't find many details on exactly what the red and green vars should be. Here's the strange part though: I finally fixed it by changing an implementation detail that shouldn't have changed semantics at all.

My implementation creates an instance of a Tape object which has two attributes: a list of integers representing the state of the machine, and a single integer identifying the current active cell on the tape. The implementation of each opcode was a method of this class, and the state of the program (what I passed as the "red" variable) was the instance of this class. After I manually factored the functionality of the class directly into the main dispatch loop and got rid of the class entirely, the JIT compiler started producing correct results.

Can anyone help me figure out why my first attempt didn't work? Do red variables that are in class instances need to be handled differently somehow?

Here's the initial version that runs incorrectly when translated with JIT: https://bitbucket.org/brownan/bf-interpreter/src/c4679b354313/targetbf.py

Here's the modified version that seems to work just fine: https://bitbucket.org/brownan/bf-interpreter/src/8095853278e9/targetbf.py

In particular, note the elimination of the Tape object in the second version, and the differences in the mainloop function as well as the differences in the "red" variables.
I've also included a few example BF programs if someone wants to try it out. The hanoi example crashes almost immediately with the first version translated with JIT. By the way, I've been translating it with the latest version of PyPy off of bitbucket. (latest as of a few weeks ago, that is) Thanks and great work on this project! -Andrew Brown
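For readers following along, here is a rough sketch of the Tape object described above; the method names are guesses for illustration only, and the real code is in the linked repo:

```python
class Tape(object):
    """Machine state: a cell array plus the index of the active cell."""
    def __init__(self):
        self.thetape = [0]
        self.position = 0

    def advance(self):          # '>'
        self.position += 1
        if self.position >= len(self.thetape):
            self.thetape.append(0)

    def devance(self):          # '<'
        self.position -= 1

    def inc(self):              # '+'
        self.thetape[self.position] += 1

    def dec(self):              # '-'
        self.thetape[self.position] -= 1

    def get(self):              # read the active cell (used by '.')
        return self.thetape[self.position]
```

Passing an instance like this as the single "red" variable is the arrangement that misbehaved under the JIT, per the message above.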
Hi Andrew, On Tue, Mar 22, 2011 at 8:44 PM, Andrew Brown <brownan@gmail.com> wrote:
https://bitbucket.org/brownan/bf-interpreter/src/c4679b354313/targetbf.py
can_enter_jit() is not correct. For it to work, it must be called just before jit_merge_point(). It's wrong that there are two intermediate instructions here: "pc += 1" and the "pc < len(program)" condition.

As a first attempt, you should just not call can_enter_jit() at all. Nowadays, if can_enter_jit is never called, it's done automatically for you; moreover, a misplaced can_enter_jit can give nonsensical results, as opposed to many other hints, which cannot give a result worse than terribly bad performance (like the greens/reds variable separation --- which seems correct in your example).

A bientôt,

Armin.
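For illustration, a minimal sketch of the dispatch-loop shape Armin describes, with jit_merge_point at the top of the loop and no explicit can_enter_jit; the import path matches the codebase of the time, and the stub class is an assumption so the sketch runs on plain CPython:

```python
# Sketch of a dispatch loop using only jit_merge_point (no can_enter_jit),
# as suggested. The stub lets the sketch execute outside the RPython
# toolchain, where the hints are no-ops anyway.
try:
    from pypy.rlib.jit import JitDriver
except ImportError:
    class JitDriver(object):            # no-op stand-in for plain CPython
        def __init__(self, greens=None, reds=None):
            pass
        def jit_merge_point(self, **kwargs):
            pass

driver = JitDriver(greens=['pc', 'program'], reds=['tape', 'position'])

def mainloop(program):
    tape = [0] * 30000
    position = 0
    pc = 0
    while pc < len(program):
        # The merge point sits at the top of the loop body; with no
        # explicit can_enter_jit, one is inserted automatically.
        driver.jit_merge_point(pc=pc, program=program,
                               tape=tape, position=position)
        op = program[pc]
        if op == '+':
            tape[position] += 1
        elif op == '-':
            tape[position] -= 1
        elif op == '>':
            position += 1
        elif op == '<':
            position -= 1
        pc += 1
    return tape, position
```

The green variables (pc, program) identify a position in the interpreted program; the red variables (tape, position) are the runtime state.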
On Tue, Mar 22, 2011 at 5:54 PM, Armin Rigo <arigo@tunes.org> wrote:
can_enter_jit() is not correct. For it to work, it must be called just before jit_merge_point(). It's wrong that there are two intermediate instructions here: "pc+=1" and the "pc < len(program)" condition.
Okay, I think I understand. I'm still learning how all this stuff works. Regardless...
As a first attempt, you should just not call can_enter_jit() at all. Nowadays, if can_enter_jit is never called, it's done automatically for you;
I did not know this. Good to know! I've removed the can_enter_jit() call from the two versions of my interpreter. However, the version that didn't work before still does not run correctly. It seems like I'm still left with the same problem as before.

This works (version without the Tape class and with the can_enter_jit call removed): https://bitbucket.org/brownan/bf-interpreter/src/6c6c80397554/targetbf.py

Incidentally, I think it may run slightly slower now, but I'm not sure.

This version still does not (version *with* the Tape class, and with can_enter_jit removed): https://bitbucket.org/brownan/bf-interpreter/src/1d16c3eed7e2/targetbf.py

Any other ideas? I'm still at a loss. Thanks for taking a look! -Andrew
Hi Andrew, On Wed, Mar 23, 2011 at 8:27 PM, Andrew Brown <brownan@gmail.com> wrote:
However, the version that didn't work before still does not run correctly. It seems like I'm still left with the same problem as before.
Ah. Looking more closely, it turns out to be a bug in the optimization step of the JIT which just never showed up so far :-/ Working on it, by cleaning up optimizeopt/heap.py... Armin
Thanks, it does indeed work now! -Andrew On Thu, Mar 24, 2011 at 12:56 PM, Armin Rigo <arigo@tunes.org> wrote:
Re-hi,
On Thu, Mar 24, 2011 at 5:11 PM, Armin Rigo <arigo@tunes.org> wrote:
Working on it, by cleaning up optimizeopt/heap.py...
Done, at least as far as fixing the bug is concerned. Now your original version (with can_enter_jit removed) works.
Armin
Hi Andrew, On Fri, Mar 25, 2011 at 5:47 PM, Andrew Brown <brownan@gmail.com> wrote:
Thanks, it does indeed work now!
The next step is to have a look at the traces produced (run with PYPYLOG=jit-log-opt:logfile), and spot the obvious missing optimizations. The biggest issue seems to be the fact that the dictionary 'bracket_map' is green, but that is not enough to ensure that it is a constant dict (it could be mutated behind the JIT's back); so in the end, every trace contains reads from it. You could fix it by moving the line

    newpc = bracket_map[pc]

to a new function to which you apply the decorator @pypy.rlib.jit.purefunction.

A bientôt,

Armin.
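A sketch of the suggested fix: precompute the bracket map once before interpretation, and route lookups through a function promised pure. The stub fallback is an assumption so the sketch runs outside RPython, where the decorator has no effect:

```python
# Lookups through a @purefunction can be constant-folded by the JIT when
# the arguments are constant, removing the dict reads from every trace.
try:
    from pypy.rlib.jit import purefunction
except ImportError:
    def purefunction(func):     # identity stand-in outside RPython
        return func

def compute_bracket_map(program):
    # Precompute matching-bracket positions in a single pass.
    stack, bracket_map = [], {}
    for pc, op in enumerate(program):
        if op == '[':
            stack.append(pc)
        elif op == ']':
            left = stack.pop()
            bracket_map[left] = pc
            bracket_map[pc] = left
    return bracket_map

@purefunction
def get_matching_bracket(bracket_map, pc):
    # Promising purity lets the JIT fold this call when bracket_map
    # and pc are constants at trace time.
    return bracket_map[pc]
```

The mainloop then calls get_matching_bracket(bracket_map, pc) instead of indexing the dict directly.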
I tried that logging option once, but I didn't know how to read the logs; they're not exactly self-explanatory. Is there a resource somewhere that explains how to read them?

Regardless, I've implemented your suggestion and moved reads from that dictionary to a function decorated with @purefunction. Indeed, performance is greatly improved! Thanks!

Current version: https://bitbucket.org/brownan/bf-interpreter/src/d3394345272e/targetbf.py

A few questions: When the optimizer encounters a "pure" function, it must compare the objects passed in to previous invocations... does it consider the contents of containers or other mutable objects, or just the object identity, to be part of the function's input? It looks like, from logs of my new version, it's not reading from the dictionary at all during the trace, so I would guess it's not considering the actual contents of the dictionary as part of the function's input. This isn't surprising, but I just want to know for sure.

Second, I noticed in jit.py the function hint(), which has a parameter: "promote - promote the argument from a variable into a constant". Could this be an appropriate alternative to the @purefunction solution? Or, I'm guessing, does it just mean the name bracket_map won't change bindings, but does not impose a restriction on mutating the dictionary?

-Andrew

On Fri, Mar 25, 2011 at 2:18 PM, Armin Rigo <arigo@tunes.org> wrote:
On 03/28/2011 07:21 PM, Andrew Brown wrote:
I tried that logging option once, but I didn't know how to read the logs. They're not exactly self explanatory. Is there a resource somewhere that explains how to read those logs?
Not really, no :-(
Regardless, I've implemented your suggestion and moved reads from that dictionary to a function decorated with @purefunction. Indeed, performance is greatly improved! Thanks!
Current version: https://bitbucket.org/brownan/bf-interpreter/src/d3394345272e/targetbf.py
A few questions:
When the optimizer encounters a "pure" function, it must compare the objects passed in to previous invocations... does it consider the contents of containers or other mutable objects, or just the object identity, to be part of the function's input?
Just the object's identity.
It looks like, from logs of my new version, it's not reading from the dictionary at all during the trace, so I would guess it's not considering the actual contents of the dictionary as part of the function's input. This isn't surprising, but I just want to know for sure.
Second, I noticed in jit.py the function hint(), which has a parameter: "promote - promote the argument from a variable into a constant". Could this be an appropriate alternative to the @purefunction solution? Or, I'm guessing, does it just mean the name bracket_map won't change bindings, but does not impose a restriction on mutating the dictionary?
If you are interested, this blog series explains the usage of hints: http://bit.ly/bundles/cfbolz/1 The logs there are prettified a bit, though.

Carl Friedrich
Hi Andrew, On Mon, Mar 28, 2011 at 7:21 PM, Andrew Brown <brownan@gmail.com> wrote:
Second, I noticed in jit.py the function hint(), which has a parameter: "promote - promote the argument from a variable into a constant". Could this be an appropriate alternative to the @purefunction solution? Or, I'm guessing, does it just mean the name bracket_map won't change bindings, but does not impose a restriction on mutating the dictionary?
One point of view on 'promote' is to mean "this variable was red, but now turn it green (i.e. make it constant)". It has no effect on a variable that is already green (= a constant).

We have no support for considering that a dict is immutable, so it needs to be done with @purefunction. But to be effective, @purefunction must receive constant arguments; so in one or two places in the source code of PyPy you will find a construction like:

    x = hint(x, promote=True)  # turn x into a constant
    some_pure_function(x)      # call this pure function on x

Indeed, Carl Friedrich's blog series explains it nicely, but it should also mention that when the hints described in the blog are applied not to integers but to pointers, they apply only to the pointers themselves, not to the fields of the objects they point to.

A bientôt,

Armin.
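A runnable sketch of the construction Armin quotes, with stand-ins for hint and purefunction so it executes on plain CPython; the stubs are assumptions, and under the real JIT the promote happens at trace time rather than at run time:

```python
# promote-then-pure-call: turn a red variable into a constant, then call
# a pure function on it so the JIT can fold the whole lookup away.
try:
    from pypy.rlib.jit import hint, purefunction
except ImportError:
    def hint(x, **flags):           # no-op outside RPython
        return x
    def purefunction(func):         # identity stand-in outside RPython
        return func

@purefunction
def lookup(table, key):
    # Only the identities of table and key count as this function's
    # input for the optimizer, not the dict's contents.
    return table[key]

def dispatch(table, key):
    key = hint(key, promote=True)   # make key green (constant)
    return lookup(table, key)       # now foldable by the JIT
```

Outside the JIT the hints do nothing and the code behaves like a plain dict lookup.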
Thanks for the info! That's all the questions I have, for now at least. Feel free to reply with any more tips if you think of any. I read over your posts you linked, Carl. They were certainly informative and helpful, thanks. I'll keep thinking of ways to improve the performance of my test interpreter, but it's so simple, I don't think there's much more that can be done. The shared attribute maps described by that link don't really apply here. In any case, I'm satisfied with the speed. It's still beaten by a BF to C translator combined with gcc -O2 though, that'd be a tough case to beat. =) -Andrew On Tue, Mar 29, 2011 at 5:33 AM, Armin Rigo <arigo@tunes.org> wrote:
On 31/03/11 14:28, Andrew Brown wrote:
In any case, I'm satisfied with the speed. It's still beaten by a BF to C translator combined with gcc -O2 though, that'd be a tough case to beat. =)
what happens if you combine the BF to C with gcc -O0 or -O1? Anyway, I think that if you feel like writing a post explaining your experience with using pypy and its jit for writing an interpreter, we could publish it on our blog. I suppose it would be useful/interesting for other people as well. What do the others think?
Compiling with -O0 is really quick, but the runtime is fairly slow. I haven't tried with -O1. -O2 takes a few seconds to compile, but even compile time plus runtime is still faster than the pypy version with jit, though not by much (I'm recalling the tests I did with the mandelbrot program specifically). I can get some actual numbers later today.

Sure, I'll write up a post. This was a lot of fun, and I think it's a great way to teach people how pypy works.

-Andrew

On Thu, Mar 31, 2011 at 8:33 AM, Antonio Cuni <anto.cuni@gmail.com> wrote:
Sure I'll write up a post. This was a lot of fun, and I think it's a great way to teach people how pypy works.
I'd love to read a post on this. Perhaps I'll get a few pointers that I can use in my Clojure-pypy port. Timothy -- “One of the main causes of the fall of the Roman Empire was that–lacking zero–they had no way to indicate successful termination of their C programs.” (Robert Firth)
On 31 March 2011 05:33, Antonio Cuni <anto.cuni@gmail.com> wrote:
On 31/03/11 14:28, Andrew Brown wrote:
In any case, I'm satisfied with the speed. It's still beaten by a BF to C translator combined with gcc -O2 though, that'd be a tough case to beat. =)
What if bf code was really really large? bf to c then gcc could take a hit as it might thrash cpu cache, as single pass gcc doesn't know what a given program would actually do at runtime. jit'd rpy would only have 1 hotspot, always in cache, and might be a little smarter too. I suppose it's hard to beat 2-pass (profile driven optimized) compiled c though.
what happens if you combine the BF to C with gcc -O0 or -O1?
Anyway, I think that if you feel like writing a post explaining your experience with using pypy and its jit for writing an interpreter, we could publish it on our blog. I suppose it would be useful/interesting for other people as well.
What do the others think?
I think it can be a great example. It's very educational ;-) It could go into official docs/howto too.
On Thu, Mar 31, 2011 at 2:09 PM, Dima Tisnek <dimaqq@gmail.com> wrote:
What if bf code was really really large?
I've only been testing with the examples included in my repo. The mandelbrot and towers of hanoi examples are pretty big, though. If you can find some larger examples, I'd like to try them.

I think it can be a great example. It's very educational ;-)
It could go into official docs/howto too.
Awesome! I'm working on writing everything up; it's turning out to be pretty long. I'm assuming no prior PyPy knowledge in the readers though =)

Here are a few numbers from tests I just did, all running the mandelbrot program:

- python double-interpreted: > 78m (did not finish)
- pypy-c (with jit) double-interpreted: 41m 34.528s
- translated interpreter, no jit: 45s
- translated interpreter, jit: 7.5s
- translated direct to C, gcc -O0: translate 0.2s, compile 0.4s, run 18.5s
- translated direct to C, gcc -O1: translate 0.2s, compile 0.85s, run 1.28s
- translated direct to C, gcc -O2: translate 0.2s, compile 2.0s, run 1.34s

-Andrew
On 31/03/11 22:05, Andrew Brown wrote:
python double-interpreted: > 78m (did not finish) pypy-c (with jit) double-interpreted: 41m 34.528s
this is interesting. We are beating cpython by more than 2x even in a "worst case" scenario, because interpreters in theory are not a very good target for tracing JITs. However, it's not the first time that we experience this, so it might be that this interpreter/tracing JIT thing is just a legend :-)
translated interpreter no jit: 45s translated interpreter jit: 7.5s translated direct to C, gcc -O0 translate: 0.2s compile: 0.4s run: 18.5s translated direct to C, gcc -O1 translate: 0.2s compile: 0.85s run: 1.28s translated direct to C, gcc -O2 translate: 0.2s compile: 2.0s run: 1.34s
these are cool as well. We are 3x faster than gcc -O0 and ~3x slower than -O1 and -O2. Pretty good, I'd say :-) ciao, anto
On Thu, Mar 31, 2011 at 6:00 PM, Antonio Cuni <anto.cuni@gmail.com> wrote:
On 31/03/11 22:05, Andrew Brown wrote:
python double-interpreted: > 78m (did not finish) pypy-c (with jit) double-interpreted: 41m 34.528s
this is interesting. We are beating cpython by more than 2x even in a "worst case" scenario, because interpreters in theory are not a very good target for tracing JITs. However, it's not the first time that we experience this, so it might be that this interpreter/tracing JIT thing is just a legend :-)
Well the issue with tracing an interpreter is the large number of paths, a brainfuck interpreter has relatively few paths compared to something like a Python VM.
Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero
Submitted for everyone's approval, I've written a draft of a pypy tutorial going over everything I learned in writing this example interpreter: https://bitbucket.org/brownan/pypy-tutorial/src

See the main document at tutorial.rst: https://bitbucket.org/brownan/pypy-tutorial/src/c0bebf4728a5/tutorial.rst

I'd love some feedback on it. I've made an effort to keep things accurate yet simple, but if there are any inaccuracies, let me know. Or fork the repo and make the correction yourself =)

-Andrew

On Thu, Mar 31, 2011 at 6:29 PM, Alex Gaynor <alex.gaynor@gmail.com> wrote:
Hi Andrew, On Mon, Apr 4, 2011 at 4:12 PM, Andrew Brown <brownan@gmail.com> wrote:
Submitted for everyone's approval, I've written a draft of a pypy tutorial going over everything I learned in writing this example interpreter. https://bitbucket.org/brownan/pypy-tutorial/src
Excellent and, as far as I can tell, very clear too! A bientôt, Armin.
On 04/04/2011 04:12 PM, Andrew Brown wrote:
Submitted for everyone's approval, I've written a draft of a pypy tutorial going over everything I learned in writing this example interpreter.
https://bitbucket.org/brownan/pypy-tutorial/src
See the main document at tutorial.rst: https://bitbucket.org/brownan/pypy-tutorial/src/c0bebf4728a5/tutorial.rst

I'd love some feedback on it. I've made an effort to keep things accurate yet simple, but if there are any inaccuracies, let me know. Or fork the repo and make the correction yourself =)
Looks very nice! Would you be up to making a guest post out of this on the PyPy blog? Carl Friedrich
On Mon, Apr 4, 2011 at 11:22 AM, Carl Friedrich Bolz <cfbolz@gmx.de> wrote:
Looks very nice! Would you be up to making a guest post out of this on the PyPy blog?
Sure! What needs to be done to turn it into a blog post and get it posted?
I assume there are format considerations, but I'm also open to any content suggestions and feedback before it "goes live". -Andrew
On 04/04/2011 05:43 PM, Andrew Brown wrote:
On Mon, Apr 4, 2011 at 11:22 AM, Carl Friedrich Bolz <cfbolz@gmx.de <mailto:cfbolz@gmx.de>> wrote:
Looks very nice! Would you be up to making a guest post out of this on the PyPy blog?
Sure! What needs to be done to turn it into a blog post and get it posted? I assume there are format considerations, but I'm also open to any content suggestions and feedback before it "goes live".
I looked again, and added two places that could use small fixes. And I updated two links; see my merge request. Apart from that, the blog post would not need many changes. It would need an introductory line like:

"This is a guest post by Andrew Brown. It's a tutorial for how to write an interpreter with PyPy, generating a JIT. It is suitable for beginners and assumes very little knowledge of PyPy."

Then we should link to the repo, and replace all file links with links to bitbucket. I can do all that, and post it (tomorrow), if you are fine with that.

Carl Friedrich
Thanks for the feedback. I'll clarify those parts, and I have a few touch-ups of my own. Also, I think I forgot to add my name =) I'm fine with you posting it as you described; an intro line like that was just what I had in mind. I'd wait until tomorrow, though, to see if any other feedback surfaces. On Mon, Apr 4, 2011 at 12:17 PM, Carl Friedrich Bolz <cfbolz@gmx.de> wrote:
On 04/04/11 17:43, Andrew Brown wrote:
Sure! What needs to be done to turn it into a blog post and get it posted? I assume there are format considerations, but I'm also open to any content suggestions and feedback before it "goes live".
Hello Andrew, thanks for the tutorial, it's really well written and easy to read. Two notes:

1) Do you know about the existence of rlib.streamio? It is part of the "RPython standard library" and it allows you to read/write files in a higher-level way than file descriptors.

2) Maybe the tutorial is a bit too long to fit in just one post; what about splitting it into two parts? (e.g., one until "Adding JIT" and one after).

ciao, Anto
On Mon, Apr 4, 2011 at 12:34 PM, Antonio Cuni <anto.cuni@gmail.com> wrote:
1) Do you know about the existence of rlib.streamio? It is part of the "RPython standard library" and it allows you to read/write files in a higher-level way than file descriptors.
No, I didn't. That's good to know. I don't think it's worth updating the examples though, so unless you disagree, I'll just add a note about this module's existence.
2) Maybe the tutorial is a bit too long to fit in just one post; what about splitting it into two parts? (e.g., one until "Adding JIT" and one after).
Yes, it is quite long.
Carl, feel free to break it up as necessary when you post it. Breaking it up at the "Adding JIT" section seems ideal, since both parts are useful on their own. -Andrew
On 04/04/11 19:46, Andrew Brown wrote:
1) Do you know about the existence of rlib.streamio? It is part of the "RPython standard library" and it allows you to read/write files in a higher-level way than file descriptors.
No, I didn't. That's good to know. I don't think it's worth updating the examples though, so unless you disagree, I'll just add a note about this module's existence.
sure, I think that for this example, using fd is fine. Btw, in case you want to do more with pypy, having a look to rlib might be a good idea, there is useful stuff there :) ciao, Anto
On Mon, Apr 4, 2011 at 2:12 PM, Antonio Cuni <anto.cuni@gmail.com> wrote:
sure, I think that for this example, using fd is fine.
Btw, in case you want to do more with pypy, having a look to rlib might be a good idea, there is useful stuff there :)
Definitely.
In any case, I've made some changes, re-worded some things. Carl, I've addressed your suggestions, let me know what you think. I also re-worded a few things in the "Adding JIT" section to make it flow a bit better assuming it will be split up. It may still need some editing though. -Andrew
On 04/04/2011 10:28 PM, Andrew Brown wrote:
On Mon, Apr 4, 2011 at 2:12 PM, Antonio Cuni <anto.cuni@gmail.com <mailto:anto.cuni@gmail.com>> wrote:
sure, I think that for this example, using fd is fine.
Btw, in case you want to do more with pypy, having a look to rlib might be a good idea, there is useful stuff there :)
Definitely.
In any case, I've made some changes, re-worded some things. Carl, I've addressed your suggestions, let me know what you think.
I also re-worded a few things in the "Adding JIT" section to make it flow a bit better assuming it will be split up. It may still need some editing though.
Looked good, I just went ahead and posted the first part: http://morepypy.blogspot.com/2011/04/tutorial-writing-interpreter-with-pypy.... Will do the second part tomorrow. Thanks a lot for the tutorial, I think it's really great. Carl Friedrich
Nice one! hehe, I like how you managed to avoid the first letter of the second word of the language ;)
On Tue, Apr 5, 2011 at 8:54 AM, René Dudfield <renesd@gmail.com> wrote:
hehe, I like how you managed to avoid the first letter of the second word of the language ;)
=) Yeah, I had to think about that a bit.

Looked good, I just went ahead and posted the first part:
http://morepypy.blogspot.com/2011/04/tutorial-writing-interpreter-with-pypy....
Will do the second part tomorrow. Thanks a lot for the tutorial, I think it's really great.
Thanks! It looks great up there. I corrected a typo and changed the wording in 2 places. They're not any huge deals, but if you want to edit the post, see my changes here: https://bitbucket.org/brownan/pypy-tutorial/changeset/8cfb3cd72515 (summary: re-inventing -> re-implementing, toochain -> toolchain, and clarified that the mandelbrot program is written in BF) -Andrew
Hi, On Tue, Apr 5, 2011 at 3:40 PM, Andrew Brown <brownan@gmail.com> wrote:
I corrected a typo and changed the wording in 2 places. They're not any huge deals, but if you want to edit the post, see my changes here: https://bitbucket.org/brownan/pypy-tutorial/changeset/8cfb3cd72515
Thanks, applied. I also removed your explicit e-mail address, and replaced it with a link to one of your previous posts on this mailing list, from where people can still get your e-mail if they want --- but at least it's partially filtered against spammers. A bientôt, Armin.
On 04/05/2011 03:54 PM, Armin Rigo wrote:
Hi,
On Tue, Apr 5, 2011 at 3:40 PM, Andrew Brown<brownan@gmail.com> wrote:
I corrected a typo and changed the wording in 2 places. They're not any huge deals, but if you want to edit the post, see my changes here: https://bitbucket.org/brownan/pypy-tutorial/changeset/8cfb3cd72515
Thanks, applied. I also removed your explicit e-mail address, and replaced it with a link to one of your previous posts on this mailing list, from where people can still get your e-mail if they want --- but at least it's partially filtered against spammers.
Second post is up too: http://bit.ly/fLjGHs Thanks again, Andrew! FWIW, the first post is already on place four of all PyPy blog posts in the ranking of page impressions. Carl Friedrich
Hmm, looks like the line numbers for the JIT trace output are mis-aligned, although it may just be my browser (Chrome beta). Looks fine in Firefox. Oh well. But anyways... On Wed, Apr 6, 2011 at 9:13 AM, Carl Friedrich Bolz <cfbolz@gmx.de> wrote:
Thanks again, Andrew! FWIW, the first post is already on place four of all PyPy blog posts in the ranking of page impressions.
You're welcome! That's awesome to hear, I'm glad I could contribute. Also, Dan, if you wanted to post your version I'm curious to see your approach. -Andrew
No good, it still looks like this: http://i.imgur.com/nuLIf.png Chrome 12.0.712.0 dev on Ubuntu. -A On Wed, Apr 6, 2011 at 10:05 AM, Carl Friedrich Bolz <cfbolz@gmx.de> wrote:
On 04/06/2011 04:03 PM, Andrew Brown wrote:
Hmm, looks like the line numbers for the JIT trace output are mis-aligned, although it may just be my browser (Chrome beta). Looks fine in Firefox. Oh well.
Can you re-load? I tried to fix it.
Carl Friedrich
Line numbers are totally broken in Opera: it looks like double-digit numbers are split and spill onto the next line. Safari looks better, but it's still as if the line numbers are offset by half a line, so the numbers point between the source code lines.

I used SyntaxHighlighter in blogs before; that works, highlights well, gives you line numbers, and doesn't interfere with selection. It's presumably tested with all the browsers out there and it's a simple drop-in.

d.

On 6 April 2011 07:12, Andrew Brown <brownan@gmail.com> wrote:
Hey, sorry about that; I'm working full time now, and balancing life, school and full-time work is new for me... http://paste.pocoo.org/show/366731/ That's it in its current state. I actually haven't tried it on "stock" PyPy. I wrote an extra JIT optimization in the fold_intadd branch, which gave me around 50% on mandelbrot; it may yield more for you, so you might try it out (I hope to have it merged once I get another set of eyes on it, but like I said, life's been hectic, so I haven't gotten the ball rolling on code review yet). My 99bottles.bf performance is still abysmal. I'm going to go and implement caching of matching brackets right now (lunch break woo!) and see what that does for my performance. Cheers, Dan On Wed, Apr 6, 2011 at 7:03 AM, Andrew Brown <brownan@gmail.com> wrote:
participants (9)
- Alex Gaynor
- Andrew Brown
- Antonio Cuni
- Armin Rigo
- Carl Friedrich Bolz
- Dan Roberts
- Dima Tisnek
- René Dudfield
- Timothy Baldridge