I just released perf 0.3. Major changes:
- New compare_to command. The compare commands say if the difference is significant (I copied the code from perf.py)
- TextRunner now parses command line arguments and more
- TextRunner sets the CPU affinity of the worker processes to isolated CPUs
- New --json-file command line option
- New API where the sample function is itself responsible for measuring the elapsed time, useful for microbenchmarks
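To illustrate the idea of a sample function that measures the elapsed time itself, here is a minimal self-contained sketch. It does not use perf's actual API; the names are mine, chosen for illustration:

```python
import time

def sample_func(loops):
    # The sample function runs the inner loop itself and returns the
    # elapsed time, so timing overhead stays minimal: this is what
    # makes the approach useful for microbenchmarks.
    t0 = time.perf_counter()
    for _ in range(loops):
        1 + 1  # the micro-operation being measured
    return time.perf_counter() - t0

elapsed = sample_func(1000)
print("1000 loops took %.6f sec" % elapsed)
```

The benchmark runner then only receives a duration per sample, instead of timing the call from the outside.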
Writing a benchmark now only takes one line: "perf.text_runner.TextRunner().bench_func(func)"! Full example:

import time
import perf.text_runner

def func():
    time.sleep(0.001)

perf.text_runner.TextRunner().bench_func(func)
I looked at PyPy benchmarks: https://bitbucket.org/pypy/benchmarks
Results can also be serialized to JSON, but serialization only happens at the end: only the final result is written. It is not possible to save each run in a JSON file. Running multiple processes is not supported either.
With perf, the final JSON contains all data: all runs and all samples, even the warmup samples.
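A sketch of what such a "store everything" payload can look like; the field names below are illustrative, not perf's actual JSON schema:

```python
import json

# Illustrative structure: every run keeps both its warmup samples and
# its measured samples, so nothing is thrown away at serialization time.
result = {
    "name": "time.sleep(0.001)",
    "runs": [
        {"warmups": [0.00112, 0.00105], "samples": [0.00101, 0.00102, 0.00103]},
        {"warmups": [0.00110], "samples": [0.00100, 0.00104, 0.00102]},
    ],
}

text = json.dumps(result)
data = json.loads(text)
all_samples = [s for run in data["runs"] for s in run["samples"]]
print(len(all_samples))  # 6
```

Keeping every sample makes it possible to recompute statistics, or re-check significance, long after the benchmark ran.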
perf now also collects metadata in each worker process. So it is safer to compare runs, since it is possible to manually check when and how the worker executed the benchmark. For example, the CPU affinity is now saved in the metadata. Likewise, "python -m perf.timeit" now saves the setup and the statements in the metadata.
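A minimal sketch of collecting this kind of per-process metadata; the keys are invented for illustration and are not perf's actual metadata names:

```python
import datetime
import os
import platform

def collect_metadata():
    # Record when and how this worker ran the benchmark, so that two
    # runs can be compared manually afterwards.
    meta = {
        "date": datetime.datetime.now().isoformat(),
        "python_version": platform.python_version(),
        "hostname": platform.node(),
    }
    # CPU affinity is only exposed on some platforms (e.g. Linux).
    if hasattr(os, "sched_getaffinity"):
        meta["cpu_affinity"] = sorted(os.sched_getaffinity(0))
    return meta

print(collect_metadata())
```

Since each worker process collects its own metadata, a difference between two runs (different CPU set, different Python version) shows up directly in the saved results.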
With perf 0.3, TextRunner now also includes builtin calibration to compute the number of outer loop iterations: the loop count is adjusted so that each sample takes between 100 ms and 1 sec (the minimum and maximum are configurable).
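The general calibration technique can be sketched as doubling the loop count until a single sample reaches the minimum duration. This is a sketch of the technique, not perf's exact code:

```python
import time

def calibrate(sample_func, min_time=0.1):
    # Double the number of outer-loop iterations until one sample
    # takes at least min_time seconds.
    loops = 1
    while True:
        elapsed = sample_func(loops)
        if elapsed >= min_time:
            return loops
        loops *= 2

def sample_func(loops):
    # Measure the elapsed time of `loops` iterations of the workload.
    t0 = time.perf_counter()
    for _ in range(loops):
        sum(range(100))
    return time.perf_counter() - t0

loops = calibrate(sample_func, min_time=0.01)
print("calibrated to %d loops" % loops)
```

Doubling keeps the calibration cheap: only a logarithmic number of trial samples is needed before the real measurement starts.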