[docs] [issue15369] pybench and test.pystone poorly documented

STINNER Victor report at bugs.python.org
Thu Sep 15 05:11:25 EDT 2016


STINNER Victor added the comment:

Hmm, since the discussion has restarted, I am reopening the issue ...

"Well, pybench is not just one benchmark, it's a whole collection of benchmarks for various different aspects of the CPython VM and per concept it tries to calibrate itself per benchmark, since each benchmark has different overhead."

In the performance module, you now get individual timings for each pybench benchmark, instead of a single overall total, which was less useful.
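For illustration, here is how a single microbenchmark is declared with the perf module (since renamed pyperf); this is the current pyperf API and may differ in details from the 2016 release:

    import pyperf

    # Each benchmark gets its own Runner, so it is calibrated and
    # reported individually rather than folded into one total.
    runner = pyperf.Runner()
    runner.timeit(name="sort a list",
                  stmt="sorted(s)",
                  setup="s = list(range(1000))")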


"The number of iterations per benchmark will not change between runs, since this number is fixed in each benchmark."

Please take a look at the new performance module; it has a different design. Calibration is based on a minimum time per sample, no longer on hardcoded iteration counts. I modified all benchmarks, not only pybench.
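A minimal sketch of that calibration idea (not the perf module's actual code; the 100 ms threshold and the doubling strategy are illustrative assumptions):

    import time

    def calibrate(func, min_time=0.1):
        # Double the number of inner loops until one sample takes at
        # least min_time seconds, instead of hardcoding a loop count.
        loops = 1
        while True:
            t0 = time.perf_counter()
            for _ in range(loops):
                func()
            if time.perf_counter() - t0 >= min_time:
                return loops
            loops *= 2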


"BTW: Why would you want to run benchmarks in child processes and in parallel ?"

Child processes are run sequentially.

Running benchmarks in multiple processes helps to get more reliable results. Read my articles if you want to learn more about the design of my perf module:
http://haypo-notes.readthedocs.io/microbenchmark.html#my-articles
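A rough sketch of the multi-process design (a hypothetical driver, not perf's actual code; it assumes a worker script that prints one timing in seconds):

    import statistics
    import subprocess
    import sys

    def run_workers(worker_script, processes=20):
        # Spawn the workers sequentially: each fresh process gets its
        # own hash seed, memory layout, etc., which reduces the bias
        # of any single process environment.
        samples = []
        for _ in range(processes):
            out = subprocess.check_output([sys.executable, worker_script])
            samples.append(float(out))
        return statistics.mean(samples), statistics.stdev(samples)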


"Ideally, the pybench process should be the only CPU intense work load on the entire CPU to get reasonable results."

The perf module automatically uses isolated CPUs. It strongly suggests using this amazing Linux feature to run benchmarks!
https://haypo.github.io/journey-to-stable-benchmark-system.html
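For example, with CPUs isolated at boot via the Linux isolcpus= kernel parameter, a process can pin itself to them. A simplified sketch (the single "first-last" range parsing is an assumption; the sysfs file can also contain comma-separated lists):

    import os

    def pin_to_isolated_cpus():
        # CPUs listed here are skipped by the scheduler, so pinned
        # benchmarks do not compete with other workloads.
        with open('/sys/devices/system/cpu/isolated') as f:
            isolated = f.read().strip()
        if not isolated:
            raise RuntimeError('no isolated CPUs: boot with isolcpus=...')
        first, sep, last = isolated.partition('-')
        cpus = {int(first)} if not sep else set(range(int(first), int(last) + 1))
        os.sched_setaffinity(0, cpus)  # pid 0 means the current process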

I have started to write advice on getting stable benchmarks:
https://github.com/python/performance#how-to-get-stable-benchmarks

Note: See also the https://mail.python.org/mailman/listinfo/speed mailing list ;-)

----------
resolution: fixed -> 
status: closed -> open

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue15369>
_______________________________________

