I thought Pyston had been shuttered by Dropbox, so I wouldn't worry
about convincing them to change speed.pyston.org.

On Mon, May 29, 2017, 13:57 Victor Stinner <victor.stinner@gmail.com> wrote:

2017-05-29 22:45 GMT+02:00 Antoine Pitrou <solipsis@pitrou.net>:
> I don't know. It means that benchmark results published on the Web
> are generally not comparable with each other unless they happen to be
> generated with the exact same version. It reduces the usefulness of
> the benchmarks suite quite a bit IMHO.

I only know of three websites that compare Python performance:

* speed.python.org
* speed.pypy.org
* speed.pyston.org

My goal is to convince PyPy developers to use performance. I'm not
sure that pyston.org is relevant: they seem to run a modified fork of
the benchmark suite, so I don't expect results on pypy.org and
pyston.org to be comparable. I would also prefer that Pyston use the
same benchmark suite.

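As a rough illustration of the comparability problem: a result file
only makes sense next to another one produced by the exact same suite
version. The sketch below is hypothetical (it assumes pyperf-style
JSON result files that record the suite version under a
"performance_version" metadata key; the actual layout may differ):

import json
import sys

def suite_version(path):
    # Load a benchmark result file and return the recorded version
    # of the benchmark suite that produced it. The metadata layout
    # is an assumption, not a documented contract.
    with open(path) as fp:
        data = json.load(fp)
    return data.get("metadata", {}).get("performance_version")

def main():
    ref, changed = sys.argv[1], sys.argv[2]
    ref_version = suite_version(ref)
    changed_version = suite_version(changed)
    if ref_version != changed_version:
        # Different suite versions: the benchmarks themselves may
        # have changed, so any delta is meaningless.
        sys.exit("refusing to compare: suite %s vs %s"
                 % (ref_version, changed_version))
    print("both files use suite version", ref_version)

if __name__ == "__main__":
    main()
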
About speed.python.org, what was decided is to *drop* all previous
results whenever we modify the benchmarks. That's what I already did
3 times:

* 2017-03-31: old results removed; new CPython results use Git
commits instead of Mercurial ones.
* 2017-01: old results computed without PGO removed (they were
unstable because of code placement); new CPython results use PGO
(see the build sketch below).
* 2016-11-04: old results computed with the old "benchmarks" suite
removed; new CPython results (using LTO but not PGO) computed with
the new performance benchmark suite.

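For context on those PGO/LTO builds: on CPython 3.6 and later, both
can be requested at configure time. This is only a sketch of what
such a build looks like, not the exact command used for
speed.python.org (exact flags depend on platform and toolchain):

./configure --enable-optimizations --with-lto
make

--enable-optimizations runs the profile-guided-optimization training
step, which is what stabilizes the code placement mentioned above.
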
To be honest, in the meanwhile I chose to run the master branches of
perf and performance, since I develop both projects. In practice, I
never noticed any significant performance change on any benchmark
over the last 12 months when I updated dependencies. Sadly, it seems
like no significant optimization was merged in our dependencies.

> Let's ask the question a different way: was there any necessity to
> update those dependencies? If yes, then fair enough. Otherwise, the
> compatibility breakage is gratuitous.

When I started to work on benchmarks last year, I noticed that we
used a Mercurial version which was 5 years old, and a Django version
which was something like 3 years old. I would like to benchmark the
Mercurial and Django versions deployed in production.

Why do you want to update performance if you want a pinned version of
Django? Just always use the same performance version, no?

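Concretely, pinning just means installing one fixed release of the
suite and never upgrading it (the version number below is
illustrative, not a recommendation):

python3 -m pip install performance==0.5.5

The same pin can live in a requirements.txt so that every benchmark
runner installs an identical suite.
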
For speed.python.org, maybe we can decide that we always use a fixed
version of performance, and that we must remove all data each time we
change the performance version. For my needs, maybe we could spawn a
"beta" subdomain running the master branches? Again, I expect no
significant difference between the main website and the beta website.
But we can do it if you want.

Victor