Re: [Speed] performance 0.5.5 and perf 1.3 released
I thought Pyston had been shuttered by Dropbox, so I wouldn't worry about convincing them to change speed.pyston.org.
On Mon, May 29, 2017, 13:57 Victor Stinner <victor.stinner@gmail.com> wrote:
2017-05-29 22:45 GMT+02:00 Antoine Pitrou <solipsis@pitrou.net>:
I don't know. It means that benchmark results published on the Web are generally not comparable with each other unless they happen to be generated with the exact same version. It reduces the usefulness of the benchmarks suite quite a bit IMHO.
I only know of 3 websites that compare Python performance:
- speed.python.org
- speed.pypy.org
- speed.pyston.org
My goal is to convince PyPy developers to use performance. I'm not sure that pyston.org is relevant: they seem to run a modified fork of the benchmark suite, so I don't expect results on speed.pypy.org and speed.pyston.org to be comparable. I would prefer that Pyston also use the same benchmark suite.
About speed.python.org, what was decided is to *drop* all previous results whenever we modify the benchmarks. That's what I already did 3 times:
- 2017-03-31: old results removed; new CPython results identified by Git commits instead of Mercurial revisions.
- 2017-01: old results computed without PGO removed (they were unstable because of code placement); new CPython results built with PGO.
- 2016-11-04: old results computed with the old benchmarks suite removed; new CPython results (built with LTO but not PGO) computed with the new performance benchmark suite.
To be honest, in the meantime I chose to run the master branches of perf and performance, since I develop both projects. In practice, I never noticed any significant performance change on any benchmark over the last 12 months when I updated dependencies. Sadly, it seems like no significant optimization was merged into our dependencies.
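For reference, here is a minimal sketch of such a check (not the official tooling; perf's own compare_to command is the proper way to do this). It assumes perf exposes BenchmarkSuite.load(), get_benchmarks(), get_name() and mean(), and the file names are just examples:

# Rough sketch: flag benchmarks whose mean timing moved by more than 5%
# between two perf JSON result files. API names (BenchmarkSuite.load,
# get_benchmarks, get_name, mean) are assumptions.
import perf

old = perf.BenchmarkSuite.load('old.json')
new = perf.BenchmarkSuite.load('new.json')

new_by_name = {bench.get_name(): bench for bench in new.get_benchmarks()}
for bench in old.get_benchmarks():
    other = new_by_name.get(bench.get_name())
    if other is None:
        continue  # benchmark added or removed between the two runs
    change = (other.mean() - bench.mean()) / bench.mean()
    if abs(change) > 0.05:
        print('%s: %+.1f%%' % (bench.get_name(), change * 100.0))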
Let's ask the question a different way: was there any necessity to update those dependencies? If yes, then fair enough. Otherwise, the compatibility breakage is gratuitous.
When I started to work on benchmarks last year, I noticed that we used a Mercurial version which was 5 years old, and a Django version which was something like 3 years old. I would rather benchmark the Mercurial and Django versions that are deployed in production.
Why do you want to update performance if you want a pinned version of Django? Just always use the same performance version, no?
For speed.python.org, maybe we can decide that we always use a fixed version of performance, and that we must remove all data each time we change the performance version. For my needs, maybe we could spawn a "beta" subdomain running master branches? Again, I expect no significant difference between the main website and the beta website. But we can do it if you want.
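If we go that way, a trivial guard could be added on the runner side. A minimal sketch, using setuptools' pkg_resources (the pinned version numbers below are just the current releases, as examples):

# Hypothetical guard for the benchmark runner: refuse to produce
# results unless the exact pinned versions are installed, so that
# all published numbers stay comparable.
import pkg_resources

PINNED = {'perf': '1.3', 'performance': '0.5.5'}

for name, wanted in sorted(PINNED.items()):
    installed = pkg_resources.get_distribution(name).version
    if installed != wanted:
        raise SystemExit('%s %s is installed but %s is pinned: '
                         'results would not be comparable'
                         % (name, installed, wanted))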
Victor