After 9 months of development, the perf API became stable with the long-awaited version 1.0. The perf module now has a complete API to write, run and analyze benchmarks, and nice documentation explaining the traps of benchmarking and how to avoid, or even fix, them.
Over the last few days, I rewrote the documentation, hid a few more functions to prevent API changes after the 1.0 release, and made the last backward incompatible changes to fix old design issues.
I don't expect the module to be perfect. It's more a milestone to freeze the API and focus on features instead ;-)
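To give an idea of what the API looks like, here is a minimal benchmark script sketch using the Runner.bench_func() method; the benchmark name and the sorted() workload are illustrative choices, not taken from this post:

```python
import perf

# Runner parses the command line, spawns worker processes,
# calibrates the number of loops and collects values.
runner = perf.Runner()

# Benchmark sorting a small list: bench_func() calls
# sorted(list(range(1000))) repeatedly and stores the timings.
runner.bench_func('sort', sorted, list(range(1000)))
```

Running such a script writes results that the perf subcommands below can analyze.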
Changes between 0.9.6 and 1.0:
- stats command now displays percentiles (see the example commands after this list)
- hist command now also checks the benchmark stability by default
- dump command now displays raw value of calibration runs.
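These subcommands are run on a result file; for example, with a hypothetical bench.json:

```
$ python3 -m perf stats bench.json   # statistics, now including percentiles
$ python3 -m perf hist bench.json    # histogram, now checks stability by default
$ python3 -m perf dump bench.json    # raw values, now including calibration runs
```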
Backward incompatible changes:
- Remove the compare command to only keep the compare_to command, which is better defined (see the example after this list)
- Run warmup values must now be normalized per loop iteration.
- Remove __str__() methods from Benchmark. These methods were too opinionated.
- Remove perf.monotonic_clock() since it wasn't monotonic on Python 2.7.
- Remove is_significant() from the public API
- check command now only complains if min/max is 50% smaller/larger than the mean, instead of 25%.
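Comparing two results therefore now goes through a single subcommand; for example, with hypothetical old.json and new.json result files:

```
$ python3 -m perf compare_to old.json new.json
```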
Note: I already updated the performance project to perf 1.0.