PyPy and RPython
sarvilive at gmail.com
Thu Sep 2 10:29:54 CEST 2010
When I think about it, these restrictions below seem a very reasonable
tradeoff for performance.
And I can use this for just the modules/sections that are performance
critical.
Essentially, the PyPy interpreter could have a restricted mode that
enforces these restrictions.
This would help write RPython code that can then be compiled into
C/machine code using the PyPy toolchain, which as I understand it can
do this today.
If Shed Skin's generated C++ code is faster than PyPy's generated C
code, isn't that just another reason why PyPy and Shed Skin should be
joining forces?
Wouldn't we want PyPy-generated C code to be just as fast?
After all, that's how the PyPy compiler itself is built, right? And we
do want that to be fast too.
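The kind of code the RPython toolchain can translate might look like the
sketch below. This is illustrative plain Python, not verified RPython (the
real toolchain targets Python 2 and expects a target() entry point, which
is omitted here); the point is that every variable keeps a single
inferable type and nothing is patched dynamically, so a type-inferring
compiler can emit a straightforward C loop.

```python
# Hedged sketch of RPython-style code: static, inferable types only.
def dot(xs, ys):
    # Both arguments are lists of floats and never change type, so
    # the whole loop can be translated to plain C arithmetic.
    total = 0.0
    i = 0
    while i < len(xs):
        total += xs[i] * ys[i]
        i += 1
    return total

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # prints 32.0
```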
On Sep 1, 11:39 pm, John Nagle <na... at animats.com> wrote:
> On 9/1/2010 10:49 AM, sarvi wrote:
> > Is there a plan to adopt PyPy and RPython under the python foundation
> > in attempt to standardize both.
> > I have been watching PyPy and RPython evolve over the years.
> > PyPy seems to have momentum and is rapidly gaining followers and
> > performance.
> > PyPy JIT and performance would be a good thing for the Python
> > Community
> > And it seems to be well ahead of Unladen Swallow in performance and in
> > a position to improve quite a bit.
> > Secondly I have always fantasized of never having to write C code yet
> > get its compiled performance.
> > With RPython (a strict subset of Python), I can actually compile it
> > to C/machine code.
> > These 2 seem like spectacular advantages for Python to pickup on.
> > And all this by just showing the PyPy and the Python foundation's
> > support and direction to adopt them.
> > Yet I see this forum relatively quiet on PyPy or RPython? Any
> > reasons?
> > Sarvi
> The winner on performance, by a huge margin, is Shed Skin,
> the optimizing type-inferring compiler for a restricted subset
> of Python. PyPy and Unladen Swallow have run into the problem
> that if you want to keep some of the less useful dynamic semantics
> of Python, the heavy-duty optimizations become extremely difficult.
> However, if we defined a High Performance Python language, with
> some restrictions, the problem becomes much easier. The necessary
> restrictions are roughly this:
> -- Functions, once defined, cannot be redefined.
> (Inlining and redefinition do not play well together.)
> -- Variables are implicitly typed for the base types:
> integer, float, bool, and everything else. The
> compiler figures this out automatically.
> (Shed Skin does this now.)
> -- Unless a class uses a "setattr" function or has
> a __setattr__ method, its entire list of attributes is
> known at compile time.
> (In other words, you can't patch in new attributes
> from outside the class unless the class indicates
> it supports that. You can subclass, of course.)
> -- Mutable objects (other than some form of synchronized
> object) cannot be shared between threads. This is the
> key step in getting rid of the Global Interpreter Lock.
> -- "eval" must be restricted to the form that has a list of
> the variables it can access.
> -- Import after startup probably won't work.
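Two of the restrictions above already have rough analogues in today's
Python, sketched below. This is an illustration, not John's proposal
verbatim: __slots__ approximates "the entire list of attributes is known
at compile time", and passing explicit namespaces to eval approximates
"the form that has a list of the variables it can access".

```python
# Fixed attribute set: with __slots__, new attributes cannot be
# patched in from outside the class.
class Point(object):
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
try:
    p.z = 3                      # rejected: "z" is not in __slots__
    patched = True
except AttributeError:
    patched = False

# Restricted eval: an explicit namespace makes every name the
# expression may touch visible up front.
result = eval("x * y", {"__builtins__": {}}, {"x": 3, "y": 4})
```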
> Those are the essential restrictions. With those, Python
> could go 20x to 60x faster than CPython. The failures
> of PyPy and Unladen Swallow to get any significant
> performance gains over CPython demonstrate the futility
> of trying to make the current language go fast.
> Reference counts aren't a huge issue. With some static
> analysis, most reference count updates can be optimized out.
> (As for how this is done, the key issue is to determine whether
> each function "keeps" a reference to each parameter. For
> any function which does not, that parameter doesn't have
> to have reference count updates within the function.
> Most math library functions have this property.
> You do have to analyze the entire program globally, though.)
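The "keeps a reference" distinction John describes can be sketched as
follows; the function names are invented for the illustration, and the
comments describe what a hypothetical static analyzer could conclude.

```python
kept = []

def keeps(obj):
    # obj is stored beyond this call, so its reference count must
    # genuinely be incremented somewhere.
    kept.append(obj)

def does_not_keep(n):
    # n never escapes this frame; a whole-program analysis could
    # prove the refcount updates for this parameter are unnecessary.
    return n + 1
```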
> John Nagle