[Speed] Tool to run Python microbenchmarks

Victor Stinner victor.stinner at gmail.com
Thu Feb 11 17:39:17 EST 2016


Hi,

To run "micro"-benchmarks on "micro"-optimizations, I started to use
timeit, but in my experience timeit is far from reliable.

When I say "micro", I'm talking about a test which takes less than
1000 ns, sometimes even just a few nanoseconds!

You always have to run the same micro-benchmark with timeit *at
least* 5 times to find the "real" minimum runtime.
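
For example, here is a minimal sketch of what I mean, using
timeit.repeat() on a trivial statement (the statement and the numbers
are only placeholders):

    import timeit

    # Run the same micro-benchmark 5 times and keep the minimum:
    # the minimum is the least noisy estimate of the real runtime.
    timings = timeit.repeat(stmt="len('abc')", repeat=5, number=10**6)
    best = min(timings)
    print("%.1f ns per loop" % (best / 10**6 * 1e9))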

That's why I wrote my own tool to run microbenchmarks:
https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py

Yury suggested that I add this tool to the Python benchmark project.
I'm ok with that, but only if we rename it to "microbench.py" :-) I
wrote this tool to compare micro-optimizations with a long list of
very simple tests. The result is written into a file. Then you can
compare two files, or even compare multiple files to a "reference".
It "hides" differences smaller than 5% to ignore the noise.
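
The comparison logic is basically something like this sketch (note:
the one-"name: nanoseconds"-per-line file format below is invented
for the example, it's not the real format of benchmark.py):

    def load(filename):
        # Invented format for this example: one "name: nanoseconds"
        # result per line.
        results = {}
        with open(filename) as fp:
            for line in fp:
                name, _, value = line.partition(":")
                results[name.strip()] = float(value)
        return results

    def compare(ref_file, new_file, threshold=0.05):
        ref = load(ref_file)
        new = load(new_file)
        for name in sorted(ref):
            if name not in new:
                continue
            delta = (new[name] - ref[name]) / ref[name]
            if abs(delta) < threshold:
                # Hide differences smaller than 5%: probably noise.
                print("%s: (no significant difference)" % name)
            else:
                print("%s: %+.1f%%" % (name, delta * 100))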

The main feature of benchmark.py is that it calibrates the benchmark
using elapsed time to choose the number of runs and the number of
loops. I
proposed a similar idea for perf.py:
https://bugs.python.org/issue26275
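
The idea of the calibration is roughly the following sketch (not the
actual code of benchmark.py): double the number of loops until one
run takes long enough that the timer resolution becomes negligible:

    import time

    def calibrate(func, min_time=0.1):
        # min_time=0.1 seconds is an arbitrary choice for this sketch.
        loops = 1
        while True:
            start = time.perf_counter()
            for _ in range(loops):
                func()
            elapsed = time.perf_counter() - start
            if elapsed >= min_time:
                return loops
            loops *= 2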

What do you think? Would this tool be useful?

Victor

