[Cython] speed.pypy.org

Stefan Behnel stefan_ml at behnel.de
Wed Apr 27 09:45:56 CEST 2011

Robert Bradshaw, 26.04.2011 19:52:
> On Tue, Apr 26, 2011 at 7:50 AM, Stefan Behnel wrote:
>> Stefan Behnel, 15.04.2011 22:20:
>>> Stefan Behnel, 11.04.2011 15:08:
>>>> I'm currently discussing with Maciej Fijalkowski (PyPy) how to get Cython
>>>> running on speed.pypy.org (that's what I wrote "cythonrun" for). If it
>>>> works out well, we may have it up in a couple of days.
>>> ... or maybe not. It may take a little longer due to lack of time on his
>>> side.
>>>> I would expect that Cython won't be a big winner in this game, given that
>>>> it will only compile plain untyped Python code. It's also going to fail
>>>> entirely in some of the benchmarks. But I think it's worth having it up
>>>> there, simply as a way for us to see where we are performance-wise and to
>>>> get quick (nightly) feedback about optimisations we try. The benchmark
>>>> suite is also a nice set of real-world Python code that will allow us to
>>>> find compliance issues.
>>> Ok, here's what I have so far. I fixed a couple of bugs in Cython and got
>>> at least some of the benchmarks running. Note that these are actually
>>> the simple ones, only a single module each. Basically all complex
>>> benchmarks fail due to known bugs, such as Cython def functions not
>>> accepting attribute assignments (e.g. on wrapping). There's also a
>>> problem with code that uses platform-specific names conditionally, such
>>> as WindowsError when running on Windows. Cython complains about
>>> non-builtin names here. I'm considering turning that into a visible
>>> warning instead of an error, so that the name would instead be looked up
>>> dynamically and the code would fail at runtime *iff* it reaches the name
>>> lookup.
>>> Anyway, here are the numbers. I got them with "auto_cpdef" enabled,
>>> although that doesn't even seem to make a big difference. The baseline
>>> is a self-compiled Python 2.7.1+ (about a month old).
>> [numbers stripped]
>> And here's the shiny graph:
>> https://sage.math.washington.edu:8091/hudson/job/cython-devel-benchmarks-py27/lastSuccessfulBuild/artifact/chart.html
>> It gets automatically rebuilt by this Hudson job:
>> https://sage.math.washington.edu:8091/hudson/job/cython-devel-benchmarks-py27/
> Cool. Any history stored/displayed?

No. Also, the variance is rather large, depending on the load of the 
machine. Hudson/Jenkins and all of its subprocesses run with high CPU 
niceness and I/O niceness, so don't expect reproducible results.
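For reference, running a process at low scheduling priority in this way might look roughly like the following (the benchmark script name is made up; `nice` is POSIX, `ionice` is Linux-specific):

```shell
# Run a Python process at the lowest CPU priority (niceness 19);
# os.nice(0) merely reports the resulting niceness without changing it.
nice -n 19 python3 -c 'import os; print(os.nice(0))'

# On Linux, ionice -c 3 additionally moves I/O into the idle class:
#   ionice -c 3 nice -n 19 python3 run_benchmarks.py  # script name illustrative
```

Under this scheme any foreground work on the machine preempts the benchmarks, which is exactly why the timings fluctuate with load.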

Actually, if we want a proper history, I'd suggest setting up a separate 
Codespeed installation somewhere.

