On Mon, 16 Nov 2015 at 12:24 Maciej Fijalkowski <fijall@gmail.com> wrote:
Hi Brett
Any thoughts on improving the benchmark set? (I think all of {cpython, pypy, pyston} introduced new benchmarks to the set.)
We should probably start a mailing list and finally hash out a common set of benchmarks that we all agree are reasonable for measuring performance. I think we all generally agree that high-level benchmarks are good and micro-benchmarks aren't that important for cross-implementation comparisons (they obviously have their uses when working on a specific feature set, but that should be considered specific to an implementation, not part of some globally accepted set of benchmarks). So we should identify benchmarks that somewhat represent real-world workloads and aim for a balanced representation that doesn't lean one way or another (e.g., not all string manipulation or all scientific computing, but both should obviously be represented).
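To illustrate the distinction above, here is a minimal sketch (my own, not from any of the benchmark suites mentioned) of what a "high-level" benchmark looks like versus a micro-benchmark: it times a realistic JSON round-trip workload using only the standard library's timeit, rather than a single low-level operation. The workload itself is hypothetical.

```python
import json
import timeit

def workload():
    # A small but realistic workload: build structured data,
    # serialize it to JSON, and parse it back.
    data = [{"id": i, "name": "item-%d" % i, "tags": ["a", "b"]}
            for i in range(1000)]
    blob = json.dumps(data)
    return json.loads(blob)

# Report the best of 5 repeats, as benchmark harnesses commonly do,
# since the minimum is least affected by system noise.
best = min(timeit.repeat(workload, number=10, repeat=5))
print("best of 5 runs: %.4f s" % best)
```

A micro-benchmark, by contrast, would time something like a single attribute lookup in a tight loop — useful for tuning one implementation, but easy to over-fit when comparing implementations.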
"speed.python.org" becoming a thing is generally stopped on "noone cares enough to set it up".
Oh, I know; I'd call this wishful thinking myself, since I have enough on my plate to prevent me from making it happen.
-Brett
Cheers, fijal
On Mon, Nov 16, 2015 at 9:18 PM, Brett Cannon <brett@python.org> wrote:
I gave the opening keynote at PyCon CA and then gave the same talk at PyData NYC on the various interpreters of Python (the Jupyter notebook of my presentation can be found at bit.ly/pycon-ca-keynote; no video yet). I figured people here might find the benchmark numbers interesting, so I'm sharing the link here.
I'm still hoping someday speed.python.org becomes a thing so I never have to spend so much time benchmarking so many Python implementations ever again, and this sort of thing is just part of what we do to keep the implementation ecosystem healthy.
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev