Avoiding CPython performance regressions

Hi python-dev,

I've seen that on and off CPython has had attempts to measure benchmarks over time to avoid performance regressions (i.e.: https://speed.python.org), but nothing concrete so far, so I ended up creating a hosted service for that (https://www.speedtin.com). I'd like to help set up a structure to run the benchmarks from https://hg.python.org/benchmarks/ and properly upload the results to SpeedTin (if the CPython devs are OK with that) -- note that I don't really have a server to run the benchmarks, only one to host the data (but https://speed.python.org seems to indicate that such a server is available...).

There's a sample report at https://www.speedtin.com/reports/1_CPython27x_Performance_Over_Time/ (it has real data from runs of the PyPy benchmark suite, as I only found out about the benchmarks at https://hg.python.org/benchmarks/ later on -- also, it doesn't seem to support Python 3 right now, so it's probably not that useful for current Python development, but it does give some nice insight into CPython 2.7.x performance over time).

Later on, the idea is to be able to compare across different Python implementations which use the same benchmark set (although that requires the other implementations to also post their data to SpeedTin). Uploading the data to SpeedTin should be pretty straightforward (by using https://github.com/fabioz/pyspeedtin), so the main issue would be setting up a machine to run the benchmarks.

Best Regards,

Fabio
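
(For illustration only: a minimal Python sketch of the workflow described above -- time a benchmark command and POST the result to a hosted service. The endpoint, payload shape, authentication, and file paths are hypothetical placeholders, not the actual SpeedTin/pyspeedtin API.)

    # Hypothetical example: not the real pyspeedtin API, just the general idea.
    import json
    import subprocess
    import time
    import urllib.request

    def time_command(cmd, repeats=5):
        """Run a benchmark command several times; return wall-clock timings."""
        timings = []
        for _ in range(repeats):
            start = time.perf_counter()
            subprocess.check_output(cmd)   # discard the benchmark's own output
            timings.append(time.perf_counter() - start)
        return timings

    def upload(api_url, api_key, benchmark, commit_id, timings):
        """POST one result to a (made-up) JSON endpoint."""
        payload = json.dumps({"benchmark": benchmark,
                              "commit_id": commit_id,
                              "timings": timings}).encode("utf-8")
        req = urllib.request.Request(api_url, data=payload, headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,
        })
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # e.g. (all values hypothetical):
    # timings = time_command(["python", "performance/bm_nbody.py"])
    # upload("https://example.invalid/api/results", "MY_KEY", "nbody", "abc123", timings)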

On Mon, 30 Nov 2015 09:02:12 -0200, Fabio Zadrozny <fabiofz@gmail.com> wrote:
Thanks, but Zach almost has this working using codespeed (he's still waiting on a review from infrastructure, I think). The server was not in fact running; a large part of what Zach did was to get that server set up. I don't know what it would take to export the data to another consumer, but if you want to work on that I'm guessing there would be no objection. And I'm sure there would be no objection if you want to get involved in maintaining the benchmark server! There's also an Intel project posted about here recently that checks individual benchmarks for performance regressions and posts the results to python-checkins. --David

On 11/30/15, 5:52 AM, "Python-Dev on behalf of R. David Murray" <python-dev-bounces+david.c.stewart=intel.com@python.org on behalf of rdmurray@bitdance.com> wrote:
The description of the project is at https://01.org/lp - Python results are indeed sent daily to python-checkins. (No results for Nov 30 and Dec 1 due to the Romania National Day holiday!) There is also a graphic dashboard at http://languagesperformance.intel.com/ -- Dave

On Mon, Nov 30, 2015 at 3:33 PM, Stewart, David C <david.c.stewart@intel.com> wrote:
Hi Dave, Interesting, but I'm curious: which benchmark set are you running? From the graphs it seems to have a really high standard deviation, so I'd like to know whether that's really due to changes in the CPython codebase, to issues in the benchmark set, or to how the benchmarks are run... (it doesn't seem to be the benchmarks from https://hg.python.org/benchmarks/, right?). -- Fabio

On Tue, Dec 1, 2015 at 1:36 AM, Fabio Zadrozny <fabiofz@gmail.com> wrote:
Fabio – my advice to you is to check out the daily emails sent to python-checkins. An example is https://mail.python.org/pipermail/python-checkins/2015-November/140185.html. If you still have questions, Stefan can answer (he is copied). The graphs are really just a manager-level indicator of trends, which I find very useful (I have it running continuously on one of the monitors in my office), but core developers might want to see the day-to-day effect of their changes. (Particularly if they thought one was going to improve performance -- it's nice to see if you get community confirmation.) We do run a subset of https://hg.python.org/benchmarks/ nightly and run the full set when we are evaluating our performance patches. Some of the "benchmarks" really do have a high standard deviation, which makes them hardly useful for measuring incremental performance improvements, IMHO. I like to see it spelled out so I can tell whether or not I should be worried about a particular delta. Dave
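
(As an illustration of the kind of noise check being discussed -- a minimal sketch that reports a delta between two sets of timings only when it exceeds the measured spread. The two-sigma threshold and the numbers are assumptions, not Intel's actual methodology.)

    import statistics

    def delta_report(baseline, patched, sigmas=2.0):
        """Return (delta_pct, significant) for two lists of timings in seconds."""
        base_mean = statistics.mean(baseline)
        patch_mean = statistics.mean(patched)
        # Use the larger of the two standard deviations as a rough noise floor.
        noise = max(statistics.stdev(baseline), statistics.stdev(patched))
        delta_pct = 100.0 * (patch_mean - base_mean) / base_mean
        return delta_pct, abs(patch_mean - base_mean) > sigmas * noise

    baseline = [1.02, 1.00, 1.05, 0.99, 1.03]   # made-up timings
    patched = [0.97, 0.95, 0.99, 0.96, 0.98]
    pct, ok = delta_report(baseline, patched)
    print("%+.1f%% (%s)" % (pct, "significant" if ok else "within noise"))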

On 12/1/15, 7:26 AM, "Python-Dev on behalf of Stewart, David C" <python-dev-bounces+david.c.stewart=intel.com@python.org on behalf of david.c.stewart@intel.com> wrote:
Fabio – my advice to you is to check out the daily emails sent to python-checkins. An example is https://mail.python.org/pipermail/python-checkins/2015-November/140185.html. If you still have questions, Stefan can answer (he is copied).
Whoops - silly me - today is a national holiday in Romania where Stefan lives, so you might not get an answer until tomorrow. :-/

On 12/1/15, 10:56 AM, "Maciej Fijalkowski" <fijall@gmail.com> wrote:
Hi David.
Any reason you run a tiny tiny subset of benchmarks?
We could always run more. There are so many benchmarks in the full set at https://hg.python.org/benchmarks/, with such divergent results, that it seems hard to see the forest for the trees. I'm more interested in gradually adding to the set rather than sending the huge blast of all of them in a daily email. Would you disagree? Part of the reason that I monitor ssbench so closely on Python 2 is that Swift is a major element in cloud computing (and OpenStack in particular) and has ~70% of its cycles in Python. We are really interested in workloads which are representative of the way Python is used by a lot of people and which produce repeatable results (and which are open source). Do you have any suggestions? Dave

On Tue, Dec 1, 2015 at 9:04 PM, Stewart, David C <david.c.stewart@intel.com> wrote:
Last time I checked, Swift was quite a bit faster under pypy :-)
We are really interested in workloads which are representative of the way Python is used by a lot of people and which produce repeatable results (and which are open source). Do you have any suggestions?
You know our benchmark suite (https://bitbucket.org/pypy/benchmarks); we're gradually incorporating what people report. That means that (typically) it'll be open-source library benchmarks, if people get to the point of writing some. I have, for example, a Django ORM benchmark coming; I can show you if you want. I don't think there is a "representative benchmark", or maybe even a "representative set", also because open source code tends to be higher quality and less spaghetti-like than the closed source code I've seen, but we're adding and adding. Cheers, fijal

Hi Fabio, Let me know if you have any questions related to the Python benchmarks run nightly in Intel’s 0-Day Lab. Thanks, Stefan

Hi, thanks for doing the work! I'm one of the PyPy devs and I'm very interested in seeing this get somewhere. I must say I struggle to read the graph - is red good or is red bad, for example? I'm keen to help you get whatever you need to run it repeatedly. PS. The Intel stuff runs one benchmark in a very questionable manner, so let's maybe not rely on it too much.

On Tue, Dec 1, 2015 at 6:36 AM, Maciej Fijalkowski <fijall@gmail.com> wrote:
Hi Maciej, Great, it'd be awesome to have data on multiple Python VMs (my latest goal is really having a way to compare across multiple VMs/versions easily and to help each implementation keep a focus on performance). Ideally, a single, dedicated machine would be used just to run the benchmarks for multiple VMs (one less variable to take into account for comparisons later on, as I'm not sure it'd be reliable to normalize benchmark data from different machines -- it seems Zach was the one to contact for that, but if there's such a machine already being used to run PyPy, maybe it could be extended to run other VMs too?). As for the graph, it should be easy to customize (and I'm open to suggestions). As it is, red is slower and blue is faster (so, for instance, in https://www.speedtin.com/reports/1_CPython27x_Performance_Over_Time, the fastest CPython version overall was 2.7.3 -- and 2.7.1 was the baseline). I've updated the comments to make that clearer (and changed the second graph to compare the latest version against the fastest, 2.7.rc11 vs 2.7.3, for the individual benchmarks). Best Regards, Fabio
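
(A toy sketch of the comparison the graph encodes: each benchmark's time for a candidate version against the baseline version, with the sign deciding the colour -- negative/red for slower, positive/blue for faster. The data and version labels below are invented, not taken from the actual report.)

    # All numbers are invented; they only illustrate the red/blue convention.
    baseline = {"nbody": 2.10, "richards": 1.45, "pickle": 0.80}   # e.g. baseline times (s)
    candidate = {"nbody": 1.95, "richards": 1.52, "pickle": 0.78}  # e.g. a later release

    for name in sorted(baseline):
        # Positive change means the candidate took less time, i.e. it is faster.
        change_pct = 100.0 * (baseline[name] - candidate[name]) / baseline[name]
        colour = "blue (faster)" if change_pct >= 0 else "red (slower)"
        print("%-10s %+6.1f%%  %s" % (name, change_pct, colour))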

On Tue, Dec 1, 2015 at 11:49 AM, Fabio Zadrozny <fabiofz@gmail.com> wrote:
There is definitely a machine available. I suggest you ask the python-infra list for access. It definitely can be used to run more than just PyPy stuff. As for normalizing across multiple machines - don't even bother. Different architectures make A LOT of difference, especially with cache sizes and whatnot, and that seems to have a different impact on different loads. As for the graph - I like the split across benchmarks, and a better description (higher is better) would be good. I have a lot of ideas about visualizations; pop in on IRC, I'm happy to discuss :-) Cheers, fijal

On Tue, Dec 1, 2015 at 8:14 AM, Maciej Fijalkowski <fijall@gmail.com> wrote:
Ok, I mailed infrastructure(at)python.org to see how to make it work. I did add a legend now, so it should be much easier to read already ;) As for ideas on visualizations, I definitely want to hear suggestions on how to improve things, although I'll first focus on getting the servers that produce the benchmark data running and will move on to improving the graphs right afterwards. Cheers, Fabio

On Tue, Dec 1, 2015 at 9:35 AM, Victor Stinner <victor.stinner@gmail.com> wrote:
Humm, I understand your point, although I think the main reason for the confusion is the lack of a real legend there... I.e.: it's like that because the idea is that it's a comparison between two versions, not absolute benchmark times, so negative/red means one version is 'slower/worse' than the other and positive/blue means it's 'faster/better' (as a reference, Eclipse also uses the same format for reporting -- e.g.: http://download.eclipse.org/eclipse/downloads/drops4/R-4.5-201506032000/perf... ). I've added a legend now, so hopefully it clears up the confusion ;) -- Fabio

participants (6):
- Fabio Zadrozny
- Maciej Fijalkowski
- Popa, Stefan A
- R. David Murray
- Stewart, David C
- Victor Stinner