Re: [Speed] performance 0.5.5 and perf 1.3 released
On Mon, 29 May 2017 18:49:37 +0200 Victor Stinner victor.stinner@gmail.com wrote:
- The float benchmark now uses __slots__ on the Point class.
So the benchmark numbers are not comparable with previously generated ones?
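For context, a minimal sketch of the kind of change being discussed (the Point name comes from the announcement; the attributes and values are illustrative):

```python
class PointPlain:
    """Ordinary class: each instance carries a per-instance __dict__."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

class PointSlots:
    """Slotted class: __slots__ replaces the per-instance __dict__ with
    fixed storage, reducing memory use and speeding up attribute access,
    which is why results are no longer comparable with older runs."""
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

# A slotted instance has no __dict__ at all:
assert not hasattr(PointSlots(1.0, 2.0), "__dict__")
assert hasattr(PointPlain(1.0, 2.0), "__dict__")
```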
- Remove the following microbenchmarks. They have been moved to the pymicrobench project (https://github.com/haypo/pymicrobench) because they are too short, not representative of real applications, and are too unstable.
[...]
logging_silent: values are faster than 1 ns on PyPy with 2^27 loops! (and around 0.7 us on CPython)
The performance of silent logging calls is actually important for all applications which have debug() calls in their critical paths. This is quite common in network and/or distributed programming where you want to allow logging many events for diagnosis of unexpected runtime issues (because many unexpected conditions can appear), but with those logs disabled by default for performance and readability reasons.
This is no more a micro-benchmark than is, say, pickling or JSON encoding; and much less so than solving the N-body problem in pure Python without Numpy...
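A minimal sketch of the pattern in question (the logger name and handler function are hypothetical): debug() is called on a hot path, but the logger's level is above DEBUG, so the call is "silent" and must return as cheaply as possible.

```python
import logging

logger = logging.getLogger("netapp")  # hypothetical logger name
logger.setLevel(logging.INFO)  # DEBUG records are discarded

def handle_packet(packet):
    # Hot path: this call emits nothing because DEBUG is disabled,
    # but it still costs a level check on every invocation -- the
    # overhead the logging_silent benchmark measures.
    logger.debug("got packet %r", packet)
    return len(packet)

def handle_packet_guarded(packet):
    # An isEnabledFor() guard also skips building the argument tuple
    # when the level is disabled, at the cost of an extra line.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("got packet %r", packet)
    return len(packet)
```

Note that passing `packet` as a lazy `%r` argument (rather than pre-formatting the string) already defers the formatting cost until a record is actually emitted.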
Update requirements
- Django: 1.11 => 1.11.1
- SQLAlchemy: 1.1.9 => 1.1.10
- certifi: 2017.1.23 => 2017.4.17
- perf: 1.2 => 1.3
- mercurial: 4.1.2 => 4.2
- tornado: 4.4.3 => 4.5.1
Are those requirements for the benchmark runner or for the benchmarks themselves? If the latter, won't updating the requirements make benchmark numbers non-comparable with those generated by previous versions? This is something that the previous benchmarks suite tried to avoid by using pinned versions of 3rd party libraries.
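For illustration, the pinning being referred to amounts to a requirements file with exact versions (the entries below simply mirror the list quoted above):

```
Django==1.11.1
SQLAlchemy==1.1.10
certifi==2017.4.17
perf==1.3
mercurial==4.2
tornado==4.5.1
```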
Regards
Antoine.