Re: [Speed] Cython's view on a common benchmark suite
On Thu, Feb 2, 2012 at 08:42, Maciej Fijalkowski <fijall@gmail.com> wrote:
On Thu, Feb 2, 2012 at 3:40 PM, Stefan Behnel <stefan_ml@behnel.de> wrote:
Maciej Fijalkowski, 02.02.2012 14:35:
Oh, we have that feature, it's called CPython. The thing is that Cython doesn't get to see the generated sources, so it won't compile them; instead, CPython ends up executing the code at normal interpreted speed. So there's nothing gained by running the benchmark at all. And even if we found a way to hook into this machinery, I doubt that the static compiler overhead would make it useful. The whole purpose of generating code is that it likely will not look the same the next time you do it (well, outside of benchmarks, that is), so even a cache is unlikely to help much for real code. It's like PyPy running code in interpreted mode before it gets compiled, except that Cython will never compile this code, even if it turns out to be worth it.
Personally, I rather consider it a feature that users can employ exec() from their Cython code to run code in plain CPython (for whatever reason).
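For illustration, here is a minimal sketch of the behavior Stefan describes (the module and function names are made up for the example): the first function is visible to Cython at compile time and gets translated to C, while the source string passed to exec() is only compiled by CPython at runtime and runs as ordinary interpreted bytecode.

    # dynamic_demo.pyx -- hypothetical module, compiled by Cython
    def run_static(long n):
        # Cython sees this loop at compile time and translates it to C.
        cdef long i, total = 0
        for i in range(n):
            total += i
        return total

    def run_dynamic(long n):
        # Cython never sees this source string; exec() hands it to
        # CPython's compile/eval machinery at runtime, so the loop
        # runs at normal interpreted speed.
        namespace = {"n": n}
        exec("total = 0\nfor i in range(n): total += i", namespace)
        return namespace["total"]

Timing run_static against run_dynamic for a large n shows the gap in question: the former is compiled, the latter interpreted, and, as Stefan notes, caching the exec()ed code would rarely help real programs since generated code usually differs between runs.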
Yes, ok, but I believe this should mean "Cython does not give speedups on this benchmark" and not "we should modify the benchmark".
Oh, I hadn't suggested modifying it. I was merely stating (as part of a longer list) that it's of no use to Cython specifically. That is, if there's anything to be gained from shortening the benchmark runs by disabling specific benchmarks for specific runtimes, this one is a candidate on our side.
Stefan
Oh ok, I misread you then, sorry.
I think having a dedicated speed.python machine is specifically so that we can run the benchmarks as much as we want :) At least Cython does not take something like an hour to compile...
Yeah, we have tried to make sure the machine we have for all of this is fast enough that we can run all the benchmarks on all measured VMs once a day. If that ever becomes an issue we would probably prune the benchmarks rather than turn them on/off selectively.
But speaking of benchmarks that won't work on other VMs (e.g. Twisted under Jython), we will obviously try to minimize how many of those we have. Twisted is somewhat of a special case because (a) PyPy has already put the time into creating the benchmarks and (b) it is used by so many people that measuring its speed is a good thing. Otherwise I would argue that all future benchmarks should be runnable on any VM, not just CPython or other VMs that support C extensions (numpy is the only exception to this that I can think of, because of its popularity, and numpypy will extend its reach once that work is complete).