[Speed] perf 1.0 released: with a stable API

Victor Stinner victor.stinner at gmail.com
Thu Mar 16 22:07:35 EDT 2017


Hi,

After 9 months of development, the perf API is now stable with the
long-awaited "1.0" version. The perf module now has a complete API to
write, run and analyze benchmarks, along with documentation explaining
common benchmarking traps and how to avoid, or even fix, them.

http://perf.readthedocs.io/
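As a taste of the API, a minimal benchmark script could look like the
sketch below. This is only an illustration: the ``busy_loop`` function and
its argument are placeholders, not part of perf itself.

```python
def busy_loop(n):
    # Placeholder workload to benchmark.
    total = 0
    for i in range(n):
        total += i
    return total

if __name__ == "__main__":
    import perf

    # Runner parses command line options, spawns worker processes,
    # calibrates the number of loops and collects values.
    runner = perf.Runner()
    runner.bench_func('busy_loop', busy_loop, 100000)
```

Running the script with ``--output bench.json`` stores the results as JSON
for later analysis with the ``stats``, ``hist`` or ``compare_to`` commands.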

Over the last few days, I rewrote the documentation, hid a few more
functions to prevent API changes after the 1.0 release, and made the last
backward incompatible changes to fix old design issues.

I don't expect the module to be perfect. It's more of a milestone to
freeze the API and focus on features instead ;-)

Changes between 0.9.6 and 1.0:

Enhancements:

* ``stats`` command now displays percentiles
* ``hist`` command now also checks the benchmark stability by default
* ``dump`` command now displays raw values of calibration runs
* Add ``Benchmark.percentile()`` method
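For reference, a percentile with linear interpolation can be computed as
in the sketch below; this is a generic illustration of what such a
statistic means, not the actual ``Benchmark.percentile()`` implementation.

```python
def percentile(values, p):
    """Return the p-th percentile (0 <= p <= 100) of values,
    using linear interpolation between closest ranks."""
    values = sorted(values)
    k = (len(values) - 1) * p / 100.0
    lower = int(k)
    upper = min(lower + 1, len(values) - 1)
    return values[lower] + (values[upper] - values[lower]) * (k - lower)
```

For example, the median (50th percentile) of ``[1, 2, 3, 4]`` is 2.5.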

Backward incompatible changes:

* Remove the ``compare`` command to only keep the ``compare_to`` command
  which is better defined
* Run warmup values must now be normalized per loop iteration.
* Remove ``format()`` and ``__str__()`` methods from Benchmark. These methods
  were too opinionated.
* Rename ``--name=NAME`` option to ``--benchmark=NAME``
* Remove ``perf.monotonic_clock()`` since it wasn't monotonic on Python 2.7.
* Remove ``is_significant()`` from the public API

Other changes:

* ``check`` command now only complains if min/max is 50% smaller/larger than
  the mean, instead of 25%.
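The relaxed threshold can be sketched as below; the exact formula used by
the ``check`` command may differ, this only illustrates the 50% rule
described above (the ``is_unstable`` helper is hypothetical).

```python
def is_unstable(values, threshold=0.50):
    """Flag a benchmark whose minimum or maximum deviates from the
    mean by more than `threshold` (50% by default)."""
    mean = sum(values) / float(len(values))
    return ((mean - min(values)) > threshold * mean
            or (max(values) - mean) > threshold * mean)
```

With values ``[90, 100, 200]`` (mean 130), the maximum exceeds the mean by
70, more than 50% of 130, so the benchmark would be flagged.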


Note: I already updated the performance project to perf 1.0.

Victor

