a basic bytecode to machine code compiler
Dan Stromberg
drsalists at gmail.com
Sat Apr 2 15:47:18 EDT 2011
On Sat, Apr 2, 2011 at 12:05 PM, John Nagle <nagle at animats.com> wrote:
> On 4/2/2011 3:30 AM, Stefan Behnel wrote:
>
>> Cython actually supports most Python language features now (including
>> generators in the development branch), both from Python 2 and Python 3.
>> Chances are that the next release will actually compile most of your
>> Python code unchanged, or only with minor adaptations.
>>
>
> Cython requires the user to insert type declarations, though, to
> get a speed improvement.
>
Actually, Cython often gives modest speed improvements without type
declarations, and greater improvements with type declarations. For still
further improvements, you can turn to C data structures in Cython.
It surprises me how often people think of Cython as a weird dialect of
Python. I know I did, before actually giving Cython a try. Cython's
really pretty close to CPython; it just has some C-like things it can do
too.
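To make that concrete, here's a minimal sketch (the function names are mine,
not from the thread) of the same loop written two ways. Both are plain
Python; Cython can compile the first unchanged, and the second uses Cython's
documented pure-Python-mode declarations (cython.locals, cython.int,
cython.double), which are no-ops under CPython but give Cython static C
types:

    def dot_untyped(xs, ys):
        # Ordinary Python; Cython compiles this as-is and usually gives a
        # modest speedup just by avoiding bytecode dispatch.
        total = 0.0
        for i in range(len(xs)):
            total += xs[i] * ys[i]
        return total

    import cython

    @cython.locals(i=cython.int, total=cython.double)
    def dot_typed(xs, ys):
        # Same loop, but the decorator tells Cython to use a C int and a
        # C double, so the inner arithmetic avoids boxed Python objects.
        total = 0.0
        for i in range(len(xs)):
            total += xs[i] * ys[i]
        return total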
> Shed Skin has a good type inference system, but it insists on
> a unique type for each object (which includes "null"; a type of
> "str or null" is not acceptable). The rest of Shed Skin, outside
> the type inference system, is not very well developed.
>
> There's a path there to a fast Python with some restrictions.
> The Shed Skin inference engine with the Cython engine might have
> potential.
>
I'm not sure Shed Skin and Cython are that compatible in their backends,
though I imagine they could share ideas more. ISTR that Cython already does
some type inference without restricting types much.
> PyPy gets some speedups, mostly by recognizing when numbers can be
> unboxed.
For my current project, I find that PyPy gets quite remarkable speedups,
even when compared to Cython:
http://stromberg.dnsalias.org/~dstromberg/backshift/
The algorithm being tested is a byte-by-byte content-based checksum.
(I believe I've shared the above link here before)
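The real code is in the repository above; what follows is just a sketch of
the kind of tight per-byte loop such a checksum implies - exactly the sort
of thing CPython interprets slowly and a tracing JIT like PyPy's handles
well:

    def weak_checksum(data, modulus=2 ** 31 - 1):
        # Toy additive/positional checksum - NOT backshift's actual algorithm.
        # bytearray() makes iteration yield small ints on both Python 2 and 3.
        a = 0
        b = 0
        for byte in bytearray(data):    # one interpreter-level step per byte
            a = (a + byte) % modulus
            b = (b + a) % modulus
        return (b << 32) | a

    print(weak_checksum(b'hello world'))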
> There's no easy way to speed up Python; that's been tried.
> It needs either a very, very elaborate JIT system, more complex
> than the ones for Java or Self, or some language restrictions.
> The main restriction I would impose is to provide a call that says:
> "OK, we're done with loading, initialization, and configuration.
> Now freeze the code." At that moment, all the global
> analysis and compiling takes place. This allows getting rid
> of the GIL and getting real performance out of multithread
> CPUs.
>
Freezing things is a nice idea - one I suggested on the Unladen Swallow list
a while back.
But PyPy seems to be doing quite well without it, at least in my limited
benchmarking.
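To illustrate what such a call might look like from the application's side -
this is purely hypothetical, nothing like sys.freeze() exists in CPython -
the usage John describes would be roughly:

    import sys
    import myapp.config, myapp.workers   # hypothetical application modules

    # All dynamic work - imports, monkey-patching, configuration - happens
    # before the freeze.
    myapp.config.load('/etc/myapp.conf')

    # Hypothetical call: from here on, no new classes and no rebinding of
    # module globals, so the runtime could do whole-program analysis,
    # compile, and drop the GIL.
    sys.freeze()

    myapp.workers.run()                   # runs on the optimized code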