[Cython] speed.pypy.org

Robert Bradshaw robertwb at math.washington.edu
Wed Apr 27 19:08:44 CEST 2011

On Wed, Apr 27, 2011 at 12:45 AM, Stefan Behnel <stefan_ml at behnel.de> wrote:
> Robert Bradshaw, 26.04.2011 19:52:
>> On Tue, Apr 26, 2011 at 7:50 AM, Stefan Behnel wrote:
>>> Stefan Behnel, 15.04.2011 22:20:
>>>> Stefan Behnel, 11.04.2011 15:08:
>>>>> I'm currently discussing with Maciej Fijalkowski (PyPy) how to get
>>>>> Cython running on speed.pypy.org (that's what I wrote "cythonrun"
>>>>> for). If it works out well, we may have it up in a couple of days.
>>>> ... or maybe not. It may take a little longer due to lack of time on
>>>> his side.
>>>>> I would expect that Cython won't be a big winner in this game, given
>>>>> that it will only compile plain untyped Python code. It's also going
>>>>> to fail entirely in some of the benchmarks. But I think it's worth
>>>>> having it up there, simply as a way for us to see where we are
>>>>> performance-wise and to get quick (nightly) feedback about
>>>>> optimisations we try. The benchmark suite is also a nice set of
>>>>> real-world Python code that will allow us to find compliance issues.
>>>> Ok, here's what I have so far. I fixed a couple of bugs in Cython and
>>>> got at least some of the benchmarks running. Note that they are
>>>> actually the simple ones, each only a single module. Basically all
>>>> complex benchmarks fail due to known bugs, such as Cython def
>>>> functions not accepting attribute assignments (e.g. on wrapping).
>>>> There's also a problem with code that uses platform-specific names
>>>> conditionally, such as WindowsError when running on Windows. Cython
>>>> complains about non-builtin names here. I'm considering turning that
>>>> into a visible warning instead of an error, so that the name would
>>>> instead be looked up dynamically, letting the code fail at runtime
>>>> *iff* it actually reaches the name lookup.
>>>> Anyway, here are the numbers. I got them with "auto_cpdef" enabled,
>>>> although that doesn't even seem to make that big a difference. The
>>>> baseline is a self-compiled Python 2.7.1+ (about a month old).
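For context, `auto_cpdef` is a Cython compiler directive that compiles eligible module-level def functions as cpdef, so calls within the module can use a fast C-level path. A sketch of enabling it (the `fib` function is just an illustration; the file remains valid plain Python):

```python
# cython: auto_cpdef=True
# When compiled with Cython, eligible module-level def functions below
# become cpdef functions; intra-module calls then bypass the Python
# call protocol. Run under plain CPython, the comment is inert.

def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```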
>>> [numbers stripped]
>>> And here's the shiny graph:
>>> https://sage.math.washington.edu:8091/hudson/job/cython-devel-benchmarks-py27/lastSuccessfulBuild/artifact/chart.html
>>> It gets automatically rebuilt by this Hudson job:
>>> https://sage.math.washington.edu:8091/hudson/job/cython-devel-benchmarks-py27/
>> Cool. Any history stored/displayed?
> No. Also, the variances are rather large depending on the load of the
> machine.

Of course, that would make a history, rather than a single snapshot, all
the more useful.

> Hudson/Jenkins and all its subprocesses run with a high CPU
> niceness and I/O niceness, so don't expect reproducible results.
> Actually, if we want a proper history, I'd suggest a separate codespeed
> installation somewhere.
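For reference, the low-priority scheduling Stefan describes can be reproduced from a shell (a sketch; `ionice` is the Linux/util-linux tool for I/O priority, and `run_benchmarks.py` is a placeholder name):

```shell
# Run a command at the lowest CPU priority (niceness 19). With no
# arguments, "nice" prints the current niceness, so this prints 19.
nice -n 19 nice

# On Linux, idle I/O scheduling can be combined with CPU niceness:
#   ionice -c 3 nice -n 19 python run_benchmarks.py
```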

Makes sense. How many CPU-hours does it take? If it's not too
intensive, we could probably run it, say, daily as a normal-priority
job.
- Robert

More information about the cython-devel mailing list