Re: [Speed] Should we change what benchmarks we have?
2016-02-11 19:36 GMT+01:00 Brett Cannon <brett@python.org>:
> Are we happy with the current benchmarks?
bm_regex8 looks unstable, but I don't know if that's an issue with the benchmark itself or with perf.py (see the other thread, "[Speed] Any changes we want to make to perf.py?").
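One way to tell the two apart is to measure the run-to-run spread directly with the standard library, bypassing perf.py entirely. A minimal sketch; the string-repetition workload here is an arbitrary placeholder, not the regex benchmark itself:

import statistics
import timeit

# Time the same workload several times and look at the spread. A high
# coefficient of variation points to noise in the benchmark or the
# machine rather than a real performance difference.
runs = timeit.repeat("'a' * 100", repeat=10, number=10**6)
cv = statistics.stdev(runs) / statistics.mean(runs)
print("times: %s" % ", ".join("%.3f" % t for t in runs))
print("coefficient of variation: %.1f%%" % (cv * 100))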
I spent a lot of time (probably too much!) in recent months trying to micro-optimize some parts of Python, especially operations on Python int. See for example this long issue: https://bugs.python.org/issue21955
In the end, the discussed patch only makes two benchmarks faster: nbody & spectral_norm.
I'm disappointed because I don't know whether it's worth taking these micro-optimizations just to make two *benchmarks* run faster. Are they representative of "regular" Python code and "real-world applications"? Or are they typical math benchmarks?
For math, we all know that pure Python sucks and that better options may be available: PyPy, numba, Cython, etc. For example, PyPy is around 10x faster, whereas the discussed micro-optimizations are 1.18x faster in the best case (in one very specific micro-benchmark).
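For context, a figure like that 1.18x is typically the ratio of two timeit measurements of the same statement on the unpatched and patched builds. A minimal sketch; the "x + x" statement is an arbitrary stand-in, not the actual operation measured in the issue:

import timeit

# Run this on both the unpatched and the patched interpreter, then
# divide: speedup = unpatched_best / patched_best.
times = timeit.repeat("x + x", setup="x = 1", repeat=5, number=10**7)
print("best of 5: %.3f s" % min(times))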
> Are there some we want to drop? How about ones to add? Do we want to have explanations as to why each benchmark is included? A better balance of micro vs. macro benchmarks (and probably matching groups)?
For some kinds of optimizations, I consider a micro-benchmark to be enough. I don't have strict rules. Basically, it applies when you know that the change cannot introduce a slow-down in other cases, but will only benefit one specific case. Then the best approach is to write a tiny benchmark just for that case; see the sketch below.
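Such a tiny single-case benchmark can be very short. A sketch using the standard timeit module; the small-int addition workload is a made-up illustration, not one of the suite's benchmarks:

#!/usr/bin/env python
"""Tiny benchmark for one specific case: repeated small-int addition.

A change that can only affect this operation can be validated here,
without running the whole benchmark suite.
"""
import timeit

def bench():
    total = 0
    for i in range(1000):
        total += i
    return total

# min() of several repetitions is more robust against system noise
# than a single run or the mean.
times = timeit.repeat(bench, repeat=5, number=10000)
print("best of 5: %.6f s per 10000 calls" % min(times))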
Victor