Re: [Speed] Tool to run Python microbenchmarks

Hi Victor,
timeit does two really terrible things: it uses min(time) and it disables the garbage collector, which makes it completely unreliable.
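For illustration, here is a minimal sketch of both behaviors using only the stdlib timeit API (the timed statement is just a placeholder; per the docs, putting 'gc.enable()' in the setup string is the way to keep the collector running):

    import timeit

    # timeit disables the garbage collector while timing; re-enabling it
    # in the setup string is the documented workaround.
    with_gc = timeit.repeat("sum(range(100))",
                            setup="import gc; gc.enable()",
                            repeat=5, number=100000)
    without_gc = timeit.repeat("sum(range(100))", repeat=5, number=100000)

    # The docs suggest reporting min() of the repetitions, which throws
    # away all information about the run-to-run spread.
    print("min with GC:   ", min(with_gc))
    print("min without GC:", min(without_gc))
    print("spread without GC:", max(without_gc) - min(without_gc))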
On Thu, Feb 11, 2016 at 11:39 PM, Victor Stinner <victor.stinner@gmail.com> wrote:
Hi,
To run "micro"-benchmarks on "micro"-optimizations, I started to use timeit, but in my experience timeit it far from reliable.
When I say micro: I'm talking about a test which takes less than 1000 ns, sometimes even a few nanoseconds!
You always have to run the same micro-benchmark with timeit *at least* 5 times to find the "real" "minimum" runtime.
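As an illustration (a sketch with a placeholder statement, not my actual test), repeating a whole timeit run in a loop shows the reported "minimum" drifting from run to run:

    import timeit

    # Placeholder micro-operation; the reported "best" moves between runs.
    for run in range(5):
        best = min(timeit.repeat("{}.get('x')", repeat=3, number=1000000))
        print("run %d: best of 3 = %.1f ns per loop"
              % (run, best / 1000000 * 1e9))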
That's why I wrote my own tool to run microbenchmarks: https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py
Yury suggested that I add this tool to the Python benchmark project. I'm ok with that, but only if we rename it to "microbench.py" :-)

I wrote this tool to compare micro-optimizations with a long list of very simple tests. The result is written to a file. You can then compare two or more result files, and even compare multiple files against a "reference". It "hides" differences smaller than 5% to ignore the noise.
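The comparison step could look roughly like this (a sketch only: the JSON layout below, mapping test name to timing, is a hypothetical stand-in for benchmark.py's real file format):

    import json

    NOISE_THRESHOLD = 0.05  # hide differences smaller than 5%

    def compare(ref_path, new_path):
        with open(ref_path) as f:
            ref = json.load(f)
        with open(new_path) as f:
            new = json.load(f)
        for name in sorted(ref):
            if name not in new:
                continue
            ratio = new[name] / ref[name]
            if abs(ratio - 1.0) < NOISE_THRESHOLD:
                print("%-30s same (within noise)" % name)
            else:
                print("%-30s %.2fx" % (name, ratio))

    compare("reference.json", "patched.json")  # hypothetical file names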
The main feature of benchmark.py is that it calibrates the benchmark using elapsed time to choose the number of runs and the number of loops. I proposed a similar idea for perf.py: https://bugs.python.org/issue26275
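The idea, roughly (a sketch in the same spirit, not benchmark.py's actual code): keep doubling the loop count until one batch runs long enough to measure reliably, then derive the per-loop time:

    import time

    def calibrate(func, min_time=0.1):
        # Double the loop count until one batch takes at least min_time
        # seconds, so the timer's resolution stops dominating the result.
        loops = 1
        while True:
            t0 = time.perf_counter()
            for _ in range(loops):
                func()
            elapsed = time.perf_counter() - t0
            if elapsed >= min_time:
                return loops, elapsed / loops
            loops *= 2

    loops, per_loop = calibrate(lambda: sum(range(100)))
    print("calibrated to %d loops, ~%.0f ns per loop"
          % (loops, per_loop * 1e9))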
What do you think? Would this tool be useful?
Victor
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed