Micro-benchmarks for function calls (PEP 576/579/580)
Here is an initial version of a micro-benchmark for C function calling: https://github.com/jdemeyer/callbench I don't have results yet, since I'm struggling to find the right options to "perf timeit" to get a stable result. If somebody knows how to do this, help is welcome. Jeroen.
On Tue, Jul 10, 2018 at 7:23 AM Jeroen Demeyer wrote:
I suggest the `--duplicate 10` option. While this is a good starting point, please don't forget that we also need an "application" benchmark. Even if some function call overhead becomes 3x faster, if it takes only 3% of application execution time, total execution time is only about 2% faster. That is too small to justify the complexity of PEP 580. A realistic application benchmark demonstrates not only "how much faster", but also "how important it is". Regards,
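The arithmetic behind that estimate is just Amdahl's law. A minimal sketch (the 3% fraction and 3x local speedup are the hypothetical figures from the message above, not measurements):

```python
# Amdahl's-law style estimate: speeding up only the call-overhead
# fraction of total runtime yields a much smaller overall speedup.
def overall_speedup(fraction, local_speedup):
    """fraction: share of total runtime spent in the optimized part;
    local_speedup: how much faster that part becomes."""
    new_time = (1 - fraction) + fraction / local_speedup
    return 1 / new_time

# Call overhead is 3% of runtime and becomes 3x faster:
print(overall_speedup(0.03, 3))  # ~1.02, i.e. roughly a 2% overall gain
```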
--
INADA Naoki
The pyperformance benchmark suite had micro benchmarks on function
calls, but I removed them because they were sending the wrong signal.
A function call by itself is not meaningful for comparing two versions of
CPython, or CPython to PyPy. It's also very hard to measure the cost
of a function call when you are using a JIT compiler which is able to
inline the code into the caller... So I moved all these stupid
"micro benchmarks" to a dedicated Git repository:
https://github.com/vstinner/pymicrobench
Sometimes, I add new micro benchmarks when I work on one specific
micro optimization.
But more generally, I suggest that you not run micro benchmarks and
avoid micro optimizations :-)
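For reference, a stand-alone call micro-benchmark of the kind being discussed can be written with nothing but the stdlib `timeit` module; this is only an illustrative sketch (the benchmarked callables are arbitrary examples, not taken from callbench or pymicrobench):

```python
import timeit

def py_identity(x):
    """A trivial Python-level function, to compare against a C function."""
    return x

# Per-call cost of a cheap C function call vs. a cheap Python function
# call. Absolute numbers vary wildly with the machine and with CPU
# frequency scaling, which is exactly why such results are hard to trust.
N = 100_000
c_call = timeit.timeit("f(1)", globals={"f": abs}, number=N) / N
py_call = timeit.timeit("f(1)", globals={"f": py_identity}, number=N) / N
print(f"C call:      {c_call * 1e9:.1f} ns")
print(f"Python call: {py_call * 1e9:.1f} ns")
```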
Victor
2018-07-10 0:20 GMT+02:00 Jeroen Demeyer:
Maybe this is something obvious, but I often find myself forgetting
that most modern CPUs can change speed (and energy consumption)
depending on a moving average of CPU load.
If you don't disable this "green" feature and the benchmarks are quick, then
the result can have huge variations depending on exactly when, and whether,
the CPU switches to fast mode.
On Wed, Jul 11, 2018 at 12:53 AM Victor Stinner wrote:
2018-07-11 9:19 GMT+02:00 Andrea Griffini:
If you run "sudo python3 -m perf system tune", Turbo Boost is disabled and the CPU frequency is fixed. More info at: http://perf.readthedocs.io/en/latest/system.html Victor
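As a quick sanity check of whether a machine has been tuned, one can read the Linux cpufreq sysfs files directly. A small sketch, assuming a Linux system with the cpufreq interface exposed (elsewhere it simply returns an empty dict):

```python
from pathlib import Path

def cpu_governors():
    """Return {cpu_name: scaling_governor} from the Linux cpufreq
    sysfs interface, or an empty dict where it is unavailable."""
    base = Path("/sys/devices/system/cpu")
    result = {}
    if base.is_dir():
        for gov in base.glob("cpu[0-9]*/cpufreq/scaling_governor"):
            try:
                result[gov.parent.parent.name] = gov.read_text().strip()
            except OSError:
                pass  # CPU offline or file unreadable
    return result

# On a machine tuned for benchmarking you would expect every value to
# be "performance" rather than e.g. "powersave" or "ondemand".
print(cpu_governors())
```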
participants (4)
-
Andrea Griffini
-
INADA Naoki
-
Jeroen Demeyer
-
Victor Stinner