[pypy-dev] array performace?
p.giarrusso at gmail.com
Sat Jul 3 19:22:54 CEST 2010
On Sat, Jul 3, 2010 at 16:20, William Leslie
<william.leslie.ttg at gmail.com> wrote:
> On 3 July 2010 08:56, Bengt Richter <bokr at oz.net> wrote:
>> On 07/02/2010 11:35 AM Carl Friedrich Bolz wrote:
> Paolo recently bemoaned the
> trend toward writing modules at interp level for speed* - I'm not
> really sure if it is a trend now or not - but at some point it might
> be fun looking at optional typing annotations that compile the case
> for those assumptions. It might be a precursor to cython or pyrex
> * with justification : though ok for the stdlib, translating pypy
> every time you add an extension module is going to get old. fast.
That's one point, but it's not the biggest one. I guess that if that
happens often enough, at some point one will need to implement
separate compilation for RPython as well (at least for development). I
mean, whole-program optimization (which one would maybe lose) is
optional in other languages.
1) The real problem is that you don't want users to need interp-level
coding for their programs. If they do, there's something wrong (and I
now think/hope that's not the case).
2) Another instance of the same issue happens when Python developers
are advised to write extensions in C or to perform inlining by hand.
3) The last case is users avoiding Python (or another high-level
language) altogether because of bad performance.
The common factor is that in all cases, a weakness of the
implementation makes the abstraction less desirable, and thus user
programs are hand-optimized and become less maintainable.
That's why efficient JITs (including PyPy's) are important. It is
interesting that 2) also stems from Guido van Rossum's desire to
keep CPython simple, even though it complicates life for its users.
>> Could such assertions allow e.g. a list to be implemented as a homogeneous vector
>> of unboxed representations?
> Pypy is already great in terms of data layout, for example pypy uses
> shadow classes in the form of 'structures', but supporting more
> complicated layout optimisations (such as row or column order storage
> for structures so the JIT can do relational algebra) would probably be
> unique. It doesn't seem so far off considering that in the progression
> (list int) -> (list unpacked tuple int) -> (list unpacked homogenous
> structure), the first step, limiting or otherwise determining the item
> type, is the most complicated.
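To make that first step concrete, here is a minimal sketch (names invented for illustration; this is not PyPy's actual mechanism) of a list that keeps ints in a homogeneous unboxed array and falls back to generic boxed storage on the first non-int append:

```python
import array

class SpecializedList:
    """A list that stores ints unboxed while it can.

    Hypothetical sketch: the real list strategies in an implementation
    like PyPy are more involved; this only shows the representation
    switch the quoted progression is talking about.
    """

    def __init__(self):
        self._ints = array.array('l')   # homogeneous unboxed storage
        self._generic = None            # boxed fallback, created lazily

    def append(self, item):
        if self._generic is None:
            if isinstance(item, int):
                self._ints.append(item)  # stays unboxed
                return
            # First non-int: migrate everything to generic boxed storage.
            self._generic = list(self._ints)
        self._generic.append(item)

    def __getitem__(self, i):
        if self._generic is None:
            return self._ints[i]
        return self._generic[i]

    def __len__(self):
        if self._generic is None:
            return len(self._ints)
        return len(self._generic)
```

The point of the sketch is that determining the item type is the hard part; once the list is known to hold only ints, the unboxed layout follows naturally.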
> As for mixing languages, that is the pinnacle of awesome; but this is
> probably not the list for it. MLVMs such as JVM+JSR-292, Racket, GNU
> Guile, and Parrot; it seems to me that once you settle on an execution
> / object model and / or bytecode format, you've already decided what
> languages (where the 's' seems superfluous) support is going to be
> first class for.
You are right about "first-class support". But assembly doesn't offer
first-class support for anything, and yet you can make it work. Of
course, bytecodes are more limited, but sometimes you can manage.
I had three fellow students who implemented, for instance, a
Python-to-JVM-bytecode compiler which was way faster than Jython.
What was the trick? Python methods were encoded as Java classes
(maybe with static methods), and they performed inline caching in
bytecode, i.e., each call was converted to something like: if
(target.class() == cached_class) specificMethodClass.perform(target,
args); else perform normal method resolution (and possibly regenerate
the call site).
I'm unsure about the actual call instruction produced - either they
used static classes, or they just relied on inline caching/inlining by
the underlying JIT.
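In Python rather than JVM bytecode, the guard they emitted per call site amounts to something like this (a hypothetical sketch; the class and attribute names are mine, not theirs):

```python
class InlineCache:
    """Monomorphic inline cache for a single call site.

    Sketch only: the compiler discussed above emitted the equivalent
    guard directly in JVM bytecode; here it is spelled out in Python.
    """

    def __init__(self, method_name):
        self.method_name = method_name
        self.cached_class = None
        self.cached_method = None

    def __call__(self, receiver, *args):
        cls = type(receiver)
        if cls is self.cached_class:
            # Fast path: the guard hits, call the cached target directly.
            return self.cached_method(receiver, *args)
        # Slow path: normal method resolution, then cache for next time.
        method = getattr(cls, self.method_name)
        self.cached_class = cls
        self.cached_method = method
        return method(receiver, *args)
```

Extending the cache to remember a few (class, method) pairs instead of one gives the polymorphic variant mentioned below.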
Another detail (I guess) is that you need some form of shadow classes
(as in Self and V8, and also PyPy I guess, if we are talking about the
same concept).
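The shadow-class (or "map") idea can be sketched as follows; this is a minimal illustration with invented names, not any particular VM's implementation. Objects that add the same attributes in the same order end up sharing one layout descriptor, which is what makes the class check in an inline cache meaningful:

```python
class Map:
    """Shared layout descriptor: maps attribute names to storage slots."""

    def __init__(self, attrs=()):
        self.attrs = attrs
        self.index = {name: i for i, name in enumerate(attrs)}
        self.transitions = {}   # attr name -> successor Map (shared)

    def with_attr(self, name):
        # Reuse the successor map so identical layouts stay identical.
        if name not in self.transitions:
            self.transitions[name] = Map(self.attrs + (name,))
        return self.transitions[name]

EMPTY_MAP = Map()

class Obj:
    """Object with map-based attribute storage instead of a dict."""

    def __init__(self):
        self.map = EMPTY_MAP
        self.storage = []

    def write(self, name, value):
        if name in self.map.index:
            self.storage[self.map.index[name]] = value
        else:
            self.map = self.map.with_attr(name)
            self.storage.append(value)

    def read(self, name):
        return self.storage[self.map.index[name]]
```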
Unfortunately, I don't know whether they published their code - it was
for a term project for a course held by Lars Bak (the V8 author).
It worked quite well, and there was still potential for optimization.
I don't know how feature-complete they were, though; still, they
managed to produce a meta-implementation of inline caching (and the
same trick also allows polymorphic inline caching), where "meta-" is
used as in "meta-interpreter".
I guess it would still be possible to interoperate with Java classes -
you can still provide, I think, a conventional interface (where
methods become just... methods), even if possibly it will be slower.
> Other impedance mismatches, such as calling conventions (eg,
> arguments), reduction methods (applicative vs normal order vs
> call-by-name), mutable strings, TCE, various type systems involving
> structural types, Oliveira/Sulzmann classes, existential types,
> dependent types, value types, single and multiple inheritance, and the
> completely insane (prolog) make implementing real multi-language
> platforms a mammoth task. And even if you manage to get that working,
> how do you make exception hierarchies work?
> Why can't I cast my Java
> ArrayList as a C# ArrayList? etc.
Well, that latter question seems somewhat solved by .NET, even if they
don't really support the original libraries. Or you just use one VM
and write conversion functions on top of it.
> Sure, you could probably hook up a few of the bundled VMs, IO or E
> would make for a great twisted integration DSL. But actually
> convincing people to lock themselves into an unstandardised, unproven
> chimera? Let's just say that doing multi-language right is NP-hard.
> Doing it while targeting JVM and CLI, offering platform integration
> while supporting exotic language constructs like real continuations?
Now that you mention it, I wonder how Scala's upcoming support (in the
next release) for (delimited) continuations will work.
Paolo Giarrusso - Ph.D. Student