[Speed] perf 0.9.6 released

Victor Stinner victor.stinner at gmail.com
Wed Mar 15 21:27:25 EDT 2017


I updated the performance benchmark suite to perf 0.9.6 and patched
its python_startup and hg_startup benchmarks to use the new
bench_command() method.

This new method uses the following Python script to measure the time
to execute a command:
https://github.com/haypo/perf/blob/master/perf/_process_time.py
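
For example, measuring Python startup time with bench_command() can
look like this minimal sketch (the benchmark name and command shown
here are illustrative, not taken from the python_startup benchmark
itself):

    import sys
    import perf

    # bench_command() spawns the command in worker processes and
    # records how long each execution takes.
    runner = perf.Runner()
    runner.bench_command('python_startup', [sys.executable, '-c', 'pass'])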

I wrote the _process_time.py script to be small and simple, to keep
the overhead of the benchmark itself low. It is similar to the "real"
line of the UNIX 'time' command, but it also works on Windows.
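
Conceptually, the measurement boils down to something like the sketch
below; the actual script adds the plumbing perf needs (loop counts,
result output), so treat this only as an approximation:

    import subprocess
    import sys
    import time

    def time_command(args):
        # Wall clock time around a child process, like the "real" line
        # of 'time'; subprocess and perf_counter() also work on Windows.
        start = time.perf_counter()
        exitcode = subprocess.call(args)
        if exitcode != 0:
            sys.exit("command failed with exit code %s" % exitcode)
        return time.perf_counter() - start

    print(time_command([sys.executable, '-c', 'pass']))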

I chose to use time.perf_counter(), a wall clock, rather than
getrusage(), which provides CPU time. I find wall clock time easier to
understand than CPU time, and it is more consistent with the other
perf methods.
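
The difference shows up as soon as a command sleeps or waits on I/O:
wall clock time keeps running while CPU time does not. A quick
illustration, using time.process_time() as a portable stand-in for
getrusage():

    import time

    wall = time.perf_counter()
    cpu = time.process_time()
    time.sleep(1.0)  # waiting consumes almost no CPU
    print("wall clock: %.2f sec" % (time.perf_counter() - wall))  # ~1.00
    print("CPU time:   %.2f sec" % (time.process_time() - cpu))   # ~0.00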

Victor

2017-03-16 1:59 GMT+01:00 Victor Stinner <victor.stinner at gmail.com>:
> Hi,
>
> I released perf 0.9.6 with many changes. First, "Mean +- std dev" is
> now displayed, instead of "Median +- std dev", as a result of the
> previous thread on this list. The median is still accessible via the
> stats command. By the way, the "stats" command now displays "Median +-
> MAD" instead of "Median +- std dev".
>
> I broke the API to fix an old mistake: I used the term "sample" for a
> single value, whereas a "sample" in statistics is a set of values (one
> or more), so the term was misused. I replaced "sample" with "value"
> and "samples" with "values" everywhere in perf.
>
> http://perf.readthedocs.io/en/latest/changelog.html#version-0-9-6-2017-03-15
>
> Version 0.9.6 (2017-03-15)
> --------------------------
>
> Major change:
>
> * Display ``Mean +- std dev`` instead of ``Median +- std dev``
>
> Enhancements:
>
> * Add a new ``Runner.bench_command()`` method to measure the execution time of
>   a command.
> * Add ``mean()``, ``median_abs_dev()`` and ``stdev()`` methods to ``Benchmark``
> * ``check`` command: also test the minimum and maximum compared to the mean
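>
> For example, loading a result file and printing the new statistics
> could look like this sketch ('bench.json' stands for any file written
> by a perf run, and Benchmark.load() is assumed to accept a filename):
>
>     import perf
>
>     bench = perf.Benchmark.load('bench.json')
>     print("mean:", bench.mean())
>     print("stdev:", bench.stdev())
>     print("MAD:", bench.median_abs_dev())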
>
> Major API change, rename "sample" to "value":
>
> * Rename attributes and methods:
>
>   - ``Runner.bench_sample_func()`` => ``Runner.bench_time_func()``
>   - ``Run.samples`` => ``Run.values``
>   - ``Benchmark.get_samples()`` => ``Benchmark.get_values()``
>   - ``get_nsample()`` => ``get_nvalue()``
>   - ``Benchmark.format_sample()`` => ``Benchmark.format_value()``
>   - ``Benchmark.format_samples()`` => ``Benchmark.format_values()``
>
> * Rename Runner command line options:
>
>   - ``--samples`` => ``--values``
>   - ``--debug-single-sample`` => ``--debug-single-value``
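>
> The rename is mechanical. A small runnable sketch of the new spelling
> (before 0.9.6, bench_time_func() was called bench_sample_func()):
>
>     import time
>     import perf
>
>     # The time function receives a loop count and returns the elapsed
>     # time for those iterations.
>     def time_func(loops):
>         t0 = time.perf_counter()
>         for _ in range(loops):
>             pass
>         return time.perf_counter() - t0
>
>     runner = perf.Runner()
>     runner.bench_time_func('empty_loop', time_func)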
>
> Changes:
>
> * ``convert``: Remove ``--remove-outliers`` option
> * ``check`` command now tests stdev/mean instead of stdev/median
> * setup.py: statistics dependency is now installed using ``extras_require`` to
>   support setuptools 18 and newer
> * Add setup.cfg to enable universal builds: same wheel package for Python 2
>   and Python 3
> * Add ``perf.VERSION`` constant: tuple of int
> * JSON version 6: write metadata common to all benchmarks (common to all runs
>   of all benchmarks) at the root; rename 'samples' to 'values' in runs.
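>
> The VERSION tuple makes version checks cheap; for example (sketch):
>
>     import perf
>
>     # Tuples of ints compare element by element.
>     if perf.VERSION < (0, 9, 6):
>         raise ImportError("perf 0.9.6 or newer is required")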
>
> Victor

