Hi python-dev,
I've seen that on and off CPython has had attempts to measure benchmarks over time to avoid performance regressions (e.g.:
https://speed.python.org), but nothing concrete so far, so I ended up creating a hosted service for that (
https://www.speedtin.com), and I'd like to help set up a structure to run the benchmarks from
https://hg.python.org/benchmarks/ and properly upload them to SpeedTin (if the CPython devs are OK with that) -- note that I don't really have a server to run the benchmarks, only one to host the data (but
https://speed.python.org seems to indicate that such a server is available...).
Later on, the idea is to be able to compare results across different Python implementations that use the same benchmark set... (although that requires the other implementations to also post their data to SpeedTin).
Note that uploading the data to SpeedTin should be pretty straightforward (by using
https://github.com/fabioz/pyspeedtin), so the main issue would be setting up a machine to run the benchmarks.
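To give an idea of what an upload could look like, here is a minimal sketch of posting one measurement as JSON over HTTP. The endpoint URL, payload fields, and auth header below are assumptions for illustration only; the actual client API is the one in https://github.com/fabioz/pyspeedtin.

```python
import json
import urllib.request


def build_measurement_payload(benchmark, value, commit_id, branch="default"):
    """Build a JSON-serializable payload describing one benchmark run."""
    return {
        "benchmark": benchmark,    # e.g. "nbody" from hg.python.org/benchmarks
        "value": value,            # measured time in seconds
        "commit_id": commit_id,    # revision of CPython being benchmarked
        "branch": branch,
    }


def build_upload_request(payload, api_key):
    """Create (but do not send) the HTTP POST request for the upload."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        "https://www.speedtin.com/api/measurements",  # hypothetical endpoint
        data=data,
        headers={
            "Content-Type": "application/json",
            "X-Api-Key": api_key,  # hypothetical auth header
        },
        method="POST",
    )


payload = build_measurement_payload("nbody", 0.42, "abc123")
request = build_upload_request(payload, api_key="YOUR-API-KEY")
# urllib.request.urlopen(request)  # would perform the actual upload
```

The point is just that a benchmark runner only needs to emit a small JSON record per run; everything else (history, comparisons, regression detection) happens on the hosted side.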
Best Regards,
Fabio