If you are not subscribed to the Python-Dev mailing list, here is a
copy of the email I just sent.
---------- Forwarded message ----------
From: Victor Stinner <victor.stinner(a)gmail.com>
Date: 2016-10-20 12:56 GMT+02:00
Subject: Benchmarking Python and micro-optimizations
To: Python Dev <Python-Dev(a)python.org>
Over the last few months, I have worked a lot on benchmarks. I ran
benchmarks, analyzed the results in depth (down to the hardware and
kernel drivers!), wrote new tools and enhanced existing tools.
* I wrote a new perf module which runs benchmarks in a reliable way
and contains a LOT of features: collect metadata, a JSON file format,
commands to compare results, render a histogram, etc.
* I rewrote the Python benchmark suite: the old benchmarks Mercurial
repository moved to a new performance GitHub project which uses my
perf module and contains more benchmarks.
* I also made minor enhancements to timeit in Python 3.7 -- some
developers don't want major changes, so as not to "break backward
compatibility".
For timeit, I suggest using my perf tool, which includes a reliable
timeit command and has many more features, like --duplicate (repeat the
statements to reduce the cost of the outer loop) and --compare-to
(compare two versions of Python), as well as all the built-in perf
features (JSON output, statistics, histogram, etc.).
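For example, a comparison between two interpreters could be run like
this (the path to the reference Python is only a placeholder, and the
exact output format depends on the perf version):
$ python3 -m perf timeit --compare-to=../reference/python -s 'x=1; y=2' 'x+y' --duplicate=1000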
I added benchmarks from the PyPy and Pyston benchmark suites to
performance: performance 0.3.1 contains 51 benchmark scripts which run
a total of 121 benchmarks. Examples of tested Python modules:
* Dulwich (full Git implementation in Python)
* Mercurial (currently only the startup time)
* pyaes (AES crypto cipher in pure Python)
* Tornado (HTTP client and server)
* Django (sadly, only the template engine right now; the Pyston suite
contains more Django benchmarks)
More benchmarks will be added later. It would be nice to add
benchmarks on numpy, for example: numpy is important for a large part
of our community.
All these (new or updated) tools can now be used to make smarter
decisions about optimizations. Please don't push any optimization
anymore without providing reliable benchmark results!
My first major action was to close the latest attempt to
micro-optimize int+int in Python/ceval.c,
http://bugs.python.org/issue21955 : I closed the issue as rejected,
because there is no significant speedup on benchmarks other than two
(tiny) microbenchmarks. To make sure that no one loses their time
trying to micro-optimize int+int, I even added a comment saying:
"Please don't try to micro-optimize int+int".
The perf and performance projects are now well tested: Travis CI runs
tests on new commits and pull requests, and the "tox" command can be
used locally to test different Python versions, pep8, the doc, etc. in
a single command.
Next steps:
* Run performance 0.3.1 on speed.python.org: the benchmark runner is
currently stopped (and still uses the old benchmarks project). The
website part may be updated to allow downloading full JSON files which
include *all* information (all timings, metadata and more); see the
sketch after this list for one way such files can be inspected.
* I plan to run performance on CPython 2.7, CPython 3.7, PyPy and PyPy
3. Maybe also CPython 3.5 and CPython 3.6 if they don't take too much
time.
* Later, we can consider adding more implementations of Python:
Jython, IronPython, MicroPython, Pyston, Pyjion, etc. All benchmarks
should be run on the same hardware to be comparable.
* Later, we might also allow other projects to upload their own
benchmark results, but we should find a solution to group benchmark
results per benchmark runner (ex: at least by the hostname, the perf
JSON file contains the hostname) to not compare two results from two
different machines.
* We should continue to add more benchmarks to the performance
benchmark suite, especially benchmarks more representative of real
applications (we have enough microbenchmarks!)
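As a rough sketch of what those full JSON files would enable, here is
how results might be inspected with the perf API (the method names
follow my reading of the perf 0.8-era documentation and may differ
between versions; results.json is a placeholder file name):

import perf

# Load a suite of benchmark results from a perf JSON file.
# (BenchmarkSuite.load(), get_benchmarks(), get_metadata() and median()
# are assumptions based on the perf 0.8-era API.)
suite = perf.BenchmarkSuite.load("results.json")
for bench in suite.get_benchmarks():
    metadata = bench.get_metadata()
    # The hostname metadata is what would allow grouping results
    # per benchmark runner machine.
    print("%s: median=%s host=%s"
          % (bench.get_name(), bench.median(), metadata.get("hostname")))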
Links:
* perf: http://perf.readthedocs.io/
* performance: https://github.com/python/performance
* Python Speed mailing list: https://mail.python.org/mailman/listinfo/speed
* https://speed.python.org/ (currently outdated, and doesn't use performance yet)
See https://pypi.python.org/pypi/performance which contains even more
links to Python benchmarks (PyPy, Pyston, Numba, Pythran, etc.)
Main changes in perf 0.8:
1) perf supports multiple benchmarks per script
2) perf calibrates the benchmark in a dedicated process
3) new --duplicate option to perf timeit
I solved an old limitation of my perf module: since perf 0.8, it's now
possible to run multiple benchmarks in a single script. Example of a
script running two benchmarks:
runner = perf.Runner()
runner.bench_func('dict.get', dict_get, dico, keys)
runner.bench_func('try/except', try_except, dico, keys)
The runner spawns N processes for the first benchmark plus N processes
for the second benchmark. The trick is to pass a --worker-task option
to each worker process. In the worker, bench_func() does nothing
(returns None) if the "worker task" counter doesn't match.
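For reference, a self-contained sketch of such a script might look like
this (the helper functions and data are illustrative placeholders, not
taken from perf or from the original example):

import perf

def dict_get(dico, keys):
    # Look up every key, ignoring misses.
    for key in keys:
        dico.get(key)

def try_except(dico, keys):
    # Same lookups, but handling missing keys with an exception.
    for key in keys:
        try:
            dico[key]
        except KeyError:
            pass

dico = {str(i): i for i in range(1000)}
keys = [str(i) for i in range(0, 2000, 2)]   # roughly half of the keys are missing

runner = perf.Runner()
runner.bench_func('dict.get', dict_get, dico, keys)
runner.bench_func('try/except', try_except, dico, keys)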
I rewrote the API in perf 0.8 to simplify it and fix design issues. It
should be the latest large API change before perf 1.0 which is
expected before the end of the year.
A simple but important change to further improve the reliability of
benchmarks, especially on Python implementations that have a JIT:
benchmark calibration is now done in a dedicated process.
All worker processes (computing samples) should now run exactly the
same workload. Before, the first worker was different: it ran a few
more iterations because of the calibration.
Another simple but useful change for the shortest microbenchmarks: I
added a --duplicate option to timeit. Example:
$ python3 -m perf timeit -s 'x=1;y=2' 'x+y'
Median +- std dev: 26.7 ns +- 2.0 ns
$ python3 -m perf timeit -s 'x=1;y=2' 'x+y' --duplicate=1000
Median +- std dev: 19.2 ns +- 0.4 ns
Duplicating the statement 1,000 times reduces the per-statement timing
by 28% by amortizing the cost of the outer loop.
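To illustrate the idea -- this is not how perf implements --duplicate,
just a rough sketch using the stdlib timeit module -- inlining many
copies of the statement amortizes the per-iteration overhead of the
timing loop:

import timeit

setup = "x = 1; y = 2"
stmt = "x + y"

# Per-statement time with one statement per loop iteration.
loops = 10**6
base = timeit.timeit(stmt, setup, number=loops) / loops

# Per-statement time with the statement duplicated 1000 times per
# iteration: the outer loop cost is spread over 1000 statements.
dup = 1000
duplicated = ";".join([stmt] * dup)
amortized = timeit.timeit(duplicated, setup, number=loops // dup) / (loops // dup * dup)

print("without duplication: %.1f ns" % (base * 1e9))
print("with 1000x duplication: %.1f ns" % (amortized * 1e9))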
FYI I wrote this feature to help me make a decision on the old and
dummy "1+1" optimization for Python 3:
https://bugs.python.org/issue21955 (issue open since July 2014).
(Spoiler: I plan to collect benchmark results to explain that the
micro-optimization is useless, but I'm not 100% sure that it's useless
right now :-))
I didn't copy/paste code from the PyPy benchmarks directly: I updated
third-party dependencies, updated the code to use the perf API, and
sometimes even fixed bugs in the benchmarks.
2016-10-11 2:22 GMT+02:00 Victor Stinner <victor.stinner(a)gmail.com>:
> * Add ``sqlalchemy_declarative`` and ``sqlalchemy_imperative`` benchmarks:
> SQLAlchemy Declarative and Imperative benchmarks using SQLite. Add
> ``SQLAlchemy`` dependency.
For these two new benchmarks, it's unclear to me if the purpose of the
benchmark is to test INSERT, SELECT or INSERT+SELECT.
Currently, the benchmark tests INSERT+SELECT.
Compared to the PyPy benchmark, the benchmark now drops all rows of
the tables before each run to get more reproducible timings.
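As a rough sketch of the kind of work the declarative variant measures
(the model, table and row counts below are illustrative, not the actual
benchmark code), using an in-memory SQLite database:

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Person(Base):
    __tablename__ = 'person'
    id = Column(Integer, primary_key=True)
    name = Column(String(250))

engine = create_engine('sqlite://')          # in-memory SQLite database
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Drop all rows before the run to get reproducible timings.
session.query(Person).delete()
session.commit()

# INSERT: add a batch of rows...
for i in range(100):
    session.add(Person(name='name %d' % i))
session.commit()

# ... then SELECT: read them all back.
names = [person.name for person in session.query(Person).all()]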
I just released performance 0.3, the Python benchmark suite, with 10
new benchmarks taken from the PyPy benchmark suite.
Version 0.3.0 changelog:
* Add ``crypto_pyaes``: Benchmark a pure-Python implementation of the AES
  block-cipher in CTR mode using the pyaes module (version 1.6.0). Add
  ``pyaes`` dependency.
* Add ``sympy``: Benchmark on SymPy. Add ``scipy`` dependency.
* Add ``scimark`` benchmark
* Add ``deltablue``: DeltaBlue benchmark
* Add ``dulwich_log``: Iterate on commits of the asyncio Git repository using
the Dulwich module. Add ``dulwich`` (and ``mpmath``) dependencies.
* Add ``pyflate``: Pyflate benchmark, tar/bzip2 decompressor in pure
  Python
* Add ``sqlite_synth`` benchmark: Benchmark Python aggregate for SQLite
* Add ``genshi`` benchmark: Render template to XML or plain text using the
Genshi module. Add ``Genshi`` dependency.
* Add ``sqlalchemy_declarative`` and ``sqlalchemy_imperative`` benchmarks:
  SQLAlchemy Declarative and Imperative benchmarks using SQLite. Add
  ``SQLAlchemy`` dependency.
* ``compare`` command now fails if the performance versions are different
* ``nbody``: add ``--reference`` and ``--iterations`` command line options.
* ``chaos``: add ``--width``, ``--height``, ``--thickness``, ``--filename``
and ``--rng-seed`` command line options
* ``django_template``: add ``--table-size`` command line option
* ``json_dumps``: add ``--cases`` command line option
* ``pidigits``: add ``--digits`` command line option
* ``raytrace``: add ``--width``, ``--height`` and ``--filename`` command line
  options
* Port ``html5lib`` benchmark to Python 3
* Enable ``pickle_pure_python`` and ``unpickle_pure_python`` on Python 3
(code was already compatible with Python 3)
* Creating the virtual environment doesn't inherit environment variables
(especially ``PYTHONPATH``) by default anymore: ``--inherit-environ``
command line option must now be used explicitly.
* ``chaos`` benchmark now also resets the ``random`` module at each sample
  to get more reproducible benchmark results
* Logging benchmarks now truncate the in-memory stream before each benchmark
* Rename benchmarks to get a consistent name between the command line and
benchmark name in the JSON file.
* Rename pickle benchmarks:
- ``slowpickle`` becomes ``pickle_pure_python``
- ``slowunpickle`` becomes ``unpickle_pure_python``
- ``fastpickle`` becomes ``pickle``
- ``fastunpickle`` becomes ``unpickle``
* Rename ElementTree benchmarks: replace ``etree_`` prefix with
  ``xml_etree_``
* Rename ``hexiom2`` to ``hexiom_level25`` and explicitly pass ``--level=25``
* Rename ``json_load`` to ``json_loads``
* Rename ``json_dump_v2`` to ``json_dumps`` (and remove the deprecated
  ``json_dump`` benchmark)
* Rename ``normal_startup`` to ``python_startup``, and ``startup_nosite``
  to ``python_startup_no_site``
* Rename ``threaded_count`` to ``threading_threaded_count``,
rename ``iterative_count`` to ``threading_iterative_count``
* Rename logging benchmarks:
- ``silent_logging`` to ``logging_silent``
- ``simple_logging`` to ``logging_simple``
- ``formatted_logging`` to ``logging_format``
* Update dependencies
* Remove broken ``--args`` command line option.