[pypy-dev] Question on the future of RPython

Terrence Cole list-sink at trainedmonkeystudios.org
Wed Sep 29 22:40:59 CEST 2010


On Wed, 2010-09-29 at 11:37 +0200, Antonio Cuni wrote:
> Hi Terrence, hi all
> 
> On 28/09/10 22:33, Terrence Cole wrote:
> > I assume workflow would go like this:  1) run pypy on a bunch of code in
> > profiling mode, 2) pypy spits out lots of data about what happened in
> > the jit when the program exits, 3) start up external analysis program
> > pointing it at this data, 4) browse the python source with the data from
> > the jit overlayed as color, formatting, etc on top of the source.
> 
> You can already do it (partially) by using the PYPYLOG environment variable 
> like this:
> 
> PYPYLOG=jit-log-opt:mylog ./pypy -m test.pystone
> 
> then, mylog contains all the loops and bridges produced by the jit. The 
> interesting point is that there are also special operations called 
> "debug_merge_point" that are emitted for each python bytecode, so you can 
> easily map the low-level jit instructions back to the original python source.

I think that 'easily' in that last sentence is missing scare-quotes. :-)

> E.g., take line 214 of pystone:
>      Array1Par[IntLoc+30] = IntLoc
> 
> The corresponding python bytecode is this:
> 214          38 LOAD_FAST                4 (IntLoc)
>               41 LOAD_FAST                0 (Array1Par)
>               44 LOAD_FAST                4 (IntLoc)
>               47 LOAD_CONST               3 (30)
>               50 BINARY_ADD
>               51 STORE_SUBSCR
> 
> 
> By searching in the logs, you find the following (I edited it a bit to improve 
> readability):
> 
> debug_merge_point('<code object Proc8, line 208> #38 LOAD_FAST')
> debug_merge_point('<code object Proc8, line 208> #41 LOAD_FAST')
> debug_merge_point('<code object Proc8, line 208> #44 LOAD_FAST')
> debug_merge_point('<code object Proc8, line 208> #47 LOAD_CONST')
> debug_merge_point('<code object Proc8, line 208> #50 BINARY_ADD')
> debug_merge_point('<code object Proc8, line 208> #51 STORE_SUBSCR')
> p345 = new_with_vtable(ConstClass(W_IntObject))
> setfield_gc(p345, 8, descr=<SignedFieldDescr 
> pypy.objspace.std.intobject.W_IntObject.inst_intval 8>)
> call(ConstClass(ll_setitem__dum_checkidxConst_listPtr_Signed_objectPtr),
>       p333, 38, p345, descr=<VoidCallDescr>)
> guard_no_exception(, descr=<Guard146>) [p1, p0, p71, p345, p312, p3, p4, p6, 
> p308, p315, p335, p12, p13, p14, p15, p16, p18, p19, p178, p26, p320, p328, 
> i124, p25, i329]
> 
> Here, you can see that most opcodes are "empty" (i.e., no operations between 
> one debug_merge_point and the next). In general, all the opcodes that 
> manipulate the python stack are optimized away by the jit, because all the 
> python variables on the stack become "local variables" in the assembler.
> 
> Moreover, you can see that BINARY_ADD is also empty: this probably means that 
> the loop was specialized for the specific value of IntLoc, so the addition has 
> been constant-folded away.  Indeed, the only opcode that does real work is 
> STORE_SUBSCR.  What it does is allocate a new W_IntObject whose value is 8 
> (i.e., boxing IntLoc on the fly, because it's escaping), and store it into 
> element 38 of the list stored in p333.
> 
> Finally, we check that no exception was raised.

Wow, thank you for the awesome explanation.  I think the only surprising
thing in there is that I actually understood all of that.
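
Just to check my understanding, the first log-scraping pass I have in mind
would look roughly like the sketch below.  The exact log format is my
assumption from the excerpt above (the function name, source line, bytecode
offset, and opcode name inside debug_merge_point), so treat the details as
illustrative rather than as the real jit-log-opt grammar:

import re
from collections import defaultdict

# Matches lines like:
#   debug_merge_point('<code object Proc8, line 208> #51 STORE_SUBSCR')
MERGE_POINT = re.compile(
    r"debug_merge_point\('<code object (\w+), line (\d+)> #(\d+) (\w+)'\)")

def group_ops_by_opcode(log_path):
    """Group low-level jit operations under the python opcode they belong to.

    Returns a dict mapping (function, source line, bytecode offset, opcode)
    to the list of jit operations emitted for that opcode.
    """
    ops = defaultdict(list)
    current = None
    with open(log_path) as f:
        for raw in f:
            line = raw.strip()
            m = MERGE_POINT.match(line)
            if m:
                func, lineno, offset, opcode = m.groups()
                current = (func, int(lineno), int(offset), opcode)
            elif current is not None and line:
                ops[current].append(line)
    return ops

# Opcodes whose list comes back empty were optimized away entirely, e.g. the
# LOAD_FAST / LOAD_CONST / BINARY_ADD sequence in the excerpt above.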

> Obviously, when presenting this information to the user you must consider 
> that there is not a 1-to-1 mapping from python source to jit loops.  In the 
> example above, the very same opcodes are also compiled in another loop (which 
> by chance has the same jit-operations, but they might also be very 
> different, depending on the case).

Currently, in my hacked-together parsing chain, the low-level parser
keeps a reference to the underlying token when it creates a new node and,
subsequently, the ast builder keeps a reference to the low-level parse
node when it creates an ast node.  This way, I can easily map down to
the individual source chars and full context when walking the AST to do
highlighting, linting, etc.
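
In rough terms (the class and attribute names below are invented for
illustration, not the actual ones in my code), the chain looks something
like this:

from dataclasses import dataclass, field

@dataclass
class Token:
    value: str
    line: int       # 1-based source line
    column: int     # 0-based offset of the first character

@dataclass
class ParseNode:
    symbol: str
    token: Token                        # low-level node remembers its token
    children: list = field(default_factory=list)

@dataclass
class AstNode:
    kind: str
    parse_node: ParseNode               # ast node remembers its parse node

    def source_position(self):
        # Walking back down the chain recovers the exact source chars,
        # which is what the highlighter and linter work from.
        tok = self.parse_node.token
        return tok.line, tok.column, len(tok.value)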

My first inclination would be to continue this chain and add a bytecode
compiler on top of the ast builder.  This would keep ast node references
in the instructions it creates.  If the algorithms don't diverge too
much, I think this would allow the debug output to be mapped all the way
back to the source chars with minimal effort.  I'm not terrifically
familiar with the specifics of how python emits bytecode from an ast, so
I'd appreciate any feedback if you think this is crazy-talk.
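
Building on the sketch above, what I'm picturing for the bytecode compiler
is roughly this (again, invented names, and only a sketch of the idea):

from dataclasses import dataclass

@dataclass
class Instruction:
    opcode: str
    offset: int          # bytecode offset, as shown in the dis output above
    ast_node: "AstNode"  # back-reference recorded by the bytecode compiler

def source_span_for_offset(instructions, offset):
    """Map a bytecode offset (e.g. the #51 in a debug_merge_point) back to
    the source span of the ast node that produced it."""
    for ins in instructions:
        if ins.offset == offset:
            return ins.ast_node.source_position()
    return None

With that table in hand, the per-opcode jit data scraped from the log could
be overlaid directly on the characters that produced it.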

> As you can see, there is already a lot of information that can be useful to 
> the user.  However, don't ask me how to present it visually :-)

I don't know how either, but finding out is going to be the fun part.

> ciao,
> anto

I'm excited to try some of this out, but unfortunately, there is an
annoying problem in the way.  All of my work in the last year has been
on python3.  Having worked in python3 for a while now, my opinion is that
it's just a much better language -- surprisingly so, considering how
little it changed from python2.  If pypy supported python3, then I could
maintain my parser as a diff against pypy (you are switching to
Mercurial at some point, right?), which would make it much easier to
avoid divergence.

So what I'm getting at is: what is the pypy story for python3 support?
I haven't seen anything on pypy-dev, or in my occasional looks at the
repository, to suggest that it is being worked on, but I'm sure you have
a plan of some sort.  I'm willing to help out with python3 support, if I
can do so without getting in anyone's way.  It seems like the sort of
thing that will be disruptive, however, so I have been leery of jumping
in, considering how little time I have to contribute at the moment.

In my mind, the python3 picture is something like:
At the compilation level, it's easy enough to dump Grammar3.2 in
pypy/interpreter/pyparser/data and to modify astbuilder for python3 --
I'll backport the changes I made, if you want.  Interpreter support for
the new language features will be harder, but that's probably already
done since 2.7 is almost working.  The only potential problems I see are
the big string/unicode switch and the management of the fairly large
changes to astbuilder -- I'm sure you want to continue supporting
python2 into the future.  I don't know how much the bytecode changed
between 2 and 3, so I'm not sure if there are jit issues to worry about.
Am I missing anything big?


-Terrence
