16 Mar 2017, 7:07 p.m.
Hi,
After 9 months of development, the perf API has become stable with the long-awaited 1.0 version. The perf module now has a complete API to write, run and analyze benchmarks, and nice documentation explaining the traps of benchmarking and how to avoid, or even fix, them.
Over the last few days, I rewrote the documentation, hid a few more functions to prevent API changes after the 1.0 release, and made the last backward incompatible changes to fix old design issues.
I don't expect the module to be perfect. It's more of a milestone to freeze the API and focus on features instead ;-)
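To give an idea of the API, here is a minimal benchmark script, a sketch based on the Runner API from the perf documentation (the timed function busy_loop is just a placeholder):

    # Sketch of a perf benchmark script; busy_loop is a hypothetical workload.
    import perf

    def busy_loop():
        # Placeholder workload: sum a small range of integers.
        return sum(range(1000))

    runner = perf.Runner()
    # Run the function in calibrated loops across worker processes and
    # collect timing values; summary statistics are printed at the end.
    runner.bench_func('busy_loop', busy_loop)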
Changes between 0.9.6 and 1.0:
Enhancements:
- stats command now displays percentiles
- hist command now also checks the benchmark stability by default
- dump command now displays raw value of calibration runs
- Add Benchmark.percentile() method (see the sketch after this list)
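As an illustration of the percentiles support, here is a sketch, assuming a result was saved to a JSON file (for example with -o bench.json) and that Benchmark.load() reads such a file; the file name and percentile values are arbitrary:

    # Sketch: display percentiles of a saved benchmark result.
    import perf

    bench = perf.Benchmark.load('bench.json')  # bench.json is an example file
    print('p50:', bench.percentile(50))        # 50th percentile = median
    print('p95:', bench.percentile(95))

The same numbers are now shown by the stats command, e.g. "python3 -m perf stats bench.json".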
Backward incompatible changes:
- Remove the compare command to only keep the compare_to command, which is better defined (see the example after this list)
- Run warmup values must now be normalized per loop iteration
- Remove format() and __str__() methods from Benchmark. These methods were too opinionated.
- Rename --name=NAME option to --benchmark=NAME
- Remove perf.monotonic_clock() since it wasn't monotonic on Python 2.7
- Remove is_significant() from the public API
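For example, comparing two saved results now goes through compare_to only (the JSON file names are illustrative):

    $ python3 -m perf compare_to before.json after.json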
Other changes:
- check command now only complains if min/max is 50% smaller/larger than the mean, instead of 25%.
Note: I already updated the performance project to perf 1.0.
Victor