Is Unladen Swallow dead?

Jean-Paul Calderone calderone.jeanpaul at
Thu Nov 18 23:59:49 CET 2010

On Nov 18, 1:31 pm, John Nagle <na... at> wrote:
> On 11/18/2010 4:24 AM, BartC wrote:
> > "John Nagle" <na... at> wrote in message
> >news:4ce37e01$0$1666$742ec2ed at
> >> On 11/16/2010 10:24 PM, swapnil wrote:
> >>> AFAIK, the merging plan was approved by Guido early this year. I
> >>> guess Google is expecting the community to drive the project
> >>> from here on. That was the whole idea for merging it to mainline.
> >>> From my last conversation with Collin, they are targeting Python
> >>> 3.3
> >> I think it's dead. They're a year behind on quarterly releases.
> >> The last release was Q3 2009. The project failed to achieve its
> >> stated goal of a 5x speedup. Not even close. More like 1.5x
> >> (
> > There must have been good reasons to predict a 5x increase.
>      For Java, adding a JIT improved performance by much more than that.
> Hard-code compilers for LISP have done much better than 5x.  The
> best Java and LISP compilers approach the speed of C, while CPython
> is generally considered to be roughly 60 times slower than C.  So
> 5x probably looked like a conservative goal.  For Google, a company
> which buys servers by the acre, a 5x speedup would have a big payoff.
> > Assuming the 5x speedup was shown to be viable (ie. performing the
> > same benchmarks, on the same data, can be done that quickly in any
> > other language, and allowing for the overheads associated with
> > Python's dynamic nature), then what went wrong?
>       Python is defined by what a naive interpreter with late binding
> and dynamic name lookups, like CPython, can easily implement.  Simply
> emulating the semantics of CPython with generated code doesn't help
> all that much.
>       Because you can "monkey patch" Python objects from outside the
> class, a local compiler, like a JIT, can't turn name lookups into hard
> bindings.  Nor can it make reliable decisions about the types of
> objects.  That adds a sizable performance penalty. Short of global
> program analysis, the compiler can't tell when code for the hard cases
> needs to be generated.  So the hard-case code, where you figure out at
> run-time, for every use of "+", whether "+" is addition or concatenation,
> has to be generated every time.  Making that decision is far slower
> than doing an add.
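[To make the monkey-patching point above concrete, here is a small Python sketch. The `Counter` class is invented for illustration; the point is that because `__add__` can be reassigned from outside the class at any time, a compiler cannot hard-bind what "+" means at a given call site.]

```python
class Counter:
    """Toy class whose "+" initially means addition of the counts."""
    def __init__(self, n):
        self.n = n
    def __add__(self, other):
        return Counter(self.n + other.n)

a, b = Counter(1), Counter(2)
print((a + b).n)  # 3: "+" dispatches to Counter.__add__ (addition)

# Monkey-patch from outside the class: the very same expression
# "a + b" now means multiplication, with no change to the call site.
Counter.__add__ = lambda self, other: Counter(self.n * other.n)
print((a + b).n)  # 2: same source expression, different behavior
```

This is why, short of global analysis, generated code must keep re-checking what "+" means at run time.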

This isn't completely accurate.  It *is* possible to write a JIT
for a Python runtime which has fast-path code for the common case,
where the meaning of "+" doesn't change between every opcode.  PyPy
has produced some pretty good results with this approach.
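[A minimal sketch of the fast-path idea, in plain Python rather than any real runtime's internals: the specialized operation is protected by a type guard, and only a guard failure falls back to generic dynamic dispatch. The function names and the stats dict are illustrative, not PyPy API.]

```python
def make_guarded_add():
    """Return an add() with a guarded fast path for the common int case."""
    stats = {"fast": 0, "slow": 0}  # count which path each call takes

    def add(a, b):
        if type(a) is int and type(b) is int:
            # Guard passed: "+" is known to be integer addition, so no
            # dynamic lookup is needed (a JIT would emit a machine add).
            stats["fast"] += 1
            return a + b
        # Guard failed: fall back to full dynamic dispatch.
        stats["slow"] += 1
        return a.__add__(b)

    return add, stats

add, stats = make_guarded_add()
add(1, 2); add(3, 4)   # common case: both guards pass
add("ab", "cd")        # rare case: concatenation takes the slow path
print(stats)           # {'fast': 2, 'slow': 1}
```

As long as the guard rarely fails, most calls never pay for the generic lookup; only monkey-patched or mixed-type call sites fall back.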

For those who haven't seen them yet, there are published benchmark
results which reflect fairly well on PyPy's performance, for
benchmarks that are not entirely dissimilar to real world code.


More information about the Python-list mailing list