
2017-03-16 2:04 GMT+01:00 Wang, Peter Xihong <peter.xihong.wang@intel.com>:
Understood on the obsolete benchmark part. This was the work done before the new benchmark was created on GitHub.
I strongly advise you to move to performance. It also has a nice API. It now produces a JSON file with *all* data, instead of just writing summaries to stdout.
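For example, here is a minimal sketch of reading such a JSON file through the perf API (the file name is made up, and method names like BenchmarkSuite.load() and get_values() are from memory, so they may differ slightly between versions):

    import perf

    # Load a results file produced by something like
    # "pyperformance run -o results.json" (hypothetical file name).
    suite = perf.BenchmarkSuite.load("results.json")
    for bench in suite.get_benchmarks():
        values = bench.get_values()  # all individual timings, not just a summary
        print("%s: mean %.6f s over %d values"
              % (bench.get_name(), bench.mean(), len(values)))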
I thought this was related, and thus didn't open a new thread.
The other thread was a discussion about statistics, how to summarize all timings into two numbers :-)
Maybe you could point me to one single micro-benchmark for the time being, and then we could compare results across?
The "new" performance project is a fork of the old "benchmark" project. Benchmark names are very close or even the same for many benchmarks.
If you would like to validate that your benchmark runner is stable: run the call_method and call_simple microbenchmarks on different revisions of CPython, occasionally reboot the computer used to run the benchmarks, and make sure that the results are stable (see the sketch after the links below).
Compare them with the results on speed.python.org.
call_method: https://speed.python.org/timeline/#/?exe=5&ben=call_method&env=1&revs=50&equid=off&quarts=on&extr=on
call_simple: https://speed.python.org/timeline/#/?exe=5&ben=call_simple&env=1&revs=50&equid=off&quarts=on&extr=on
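As a rough local check, something along these lines should work. It is only a sketch: it assumes two result files produced by running the two microbenchmarks on two CPython revisions (e.g. with "pyperformance run -b call_method,call_simple -o <rev>.json", exact option syntax may vary), and the file names are made up:

    import perf

    old = perf.BenchmarkSuite.load("cpython-rev1.json")  # made-up file names
    new = perf.BenchmarkSuite.load("cpython-rev2.json")
    for name in ("call_method", "call_simple"):
        old_mean = old.get_benchmark(name).mean()
        new_mean = new.get_benchmark(name).mean()
        print("%s: %+.1f%%" % (name, 100.0 * (new_mean - old_mean) / old_mean))

If I recall correctly, the perf module also ships a compare_to command ("python3 -m perf compare_to old.json new.json") that does a proper significance test instead of this hand-rolled comparison.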
Around November and December 2016, you should notice a significant speedup on call_method.
Ideally, you should be able to avoid "temporary spikes" like this one: https://haypo.github.io/analysis-python-performance-issue.html
The API of the perf project, PGO and LTO compilation, the new performance project built on top of perf, "perf system tune" for system tuning, etc. all helped to get more stable results.
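As an illustration, a microbenchmark written against the perf Runner API looks roughly like this (a sketch; the benchmarked function and its name are made up, and perf itself takes care of spawning worker processes, warmups and calibration):

    import perf

    def call_simple_like():
        # placeholder workload, standing in for a real call_simple-style body
        pass

    runner = perf.Runner()
    runner.bench_func("call_simple_like", call_simple_like)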
Victor