On Wed, Oct 14, 2020 at 8:03 AM Pablo Galindo Salgado <pablogsal@gmail.com> wrote:
>> Would it be possible to rerun the tests with the current
>> setup for, say, the last 1000 revisions, or perhaps a subset of these
>> (e.g. every 10th revision), to try to binary search for the revision which
>> introduced the change?
>
> Every run takes 1-2 h, so doing 1000 would certainly be time-consuming :)

Would it be possible instead to run git-bisect for only a _particular_ benchmark? It seems that may be all that is needed to track down particular regressions. Also, if git-bisect is used, it would not be every 10th revision but rather O(log(n)) revisions.

—Chris

> That's why from now on I am trying to invest in daily builds for master,
> so we can answer that exact question if we detect regressions in the future.

_______________________________________________

On Wed, 14 Oct 2020 at 15:04, M.-A. Lemburg <mal@egenix.com> wrote:

On 14.10.2020 16:00, Pablo Galindo Salgado wrote:
>> Would it be possible to get the data for older runs back, so that
>> it's easier to find the changes which caused the slowdown?
>
> Unfortunately no. The reason is that the old data was misleading:
> different points were computed with different versions of pyperformance,
> and therefore with different packages (and therefore different code), so
> the points could not be compared among themselves.
>
> Also, past data didn't include 3.9 commits because the data gathering was
> not automated and it hadn't run in a long time :(
Makes sense.
Would it be possible to rerun the tests with the current
setup for, say, the last 1000 revisions, or perhaps a subset of these
(e.g. every 10th revision), to try to binary search for the revision which
introduced the change?
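The arithmetic behind the bisection suggestion can be sketched quickly. The figures below assume the 1000 revisions and every-10th-revision stride discussed in this thread, with one full pyperformance run per revision tested:

```python
import math

def bisect_steps(n_revisions: int) -> int:
    """Build-and-benchmark runs a binary search (git bisect) needs to
    isolate one offending commit among n_revisions."""
    return math.ceil(math.log2(n_revisions))

def coarse_scan_steps(n_revisions: int, stride: int = 10) -> int:
    """Runs needed when benchmarking every `stride`-th revision."""
    return math.ceil(n_revisions / stride)

# For 1000 revisions at 1-2 hours per run:
print(coarse_scan_steps(1000))  # 100 runs (every 10th revision)
print(bisect_steps(1000))       # 10 runs via binary search
```

At 1-2 hours per run, that is the difference between roughly a week of machine time and half a day, which is why bisecting on a single benchmark is attractive.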
> On Wed, 14 Oct 2020 at 14:57, M.-A. Lemburg <mal@egenix.com> wrote:
>
> Hi Pablo,
>
> thanks for pointing this out.
>
> Would it be possible to get the data for older runs back, so that
>     it's easier to find the changes which caused the slowdown?
>
> Going to the timeline, it seems that the system only has data
> for Oct 14 (today):
>
> https://speed.python.org/timeline/#/?exe=12&ben=regex_dna&env=1&revs=1000&equid=off&quarts=on&extr=on&base=none
>
> In addition to unpack_sequence, the regex_dna test has slowed
> down a lot compared to Py3.8.
>
> https://github.com/python/pyperformance/blob/master/pyperformance/benchmarks/bm_unpack_sequence.py
> https://github.com/python/pyperformance/blob/master/pyperformance/benchmarks/bm_regex_dna.py
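For a rough local spot-check of the regex_dna number without installing the full suite, a stdlib-only sketch along the lines of what bm_regex_dna exercises can work. The input size, seed, pattern choice, and iteration count below are illustrative, not the benchmark's actual parameters:

```python
import random
import re
import timeit

# A tiny stand-in for the benchmark's input: a pseudo-random DNA string.
random.seed(42)
seq = "".join(random.choice("ACGT") for _ in range(10_000))

# An alternation pattern of the kind the DNA-matching benchmark uses.
pattern = re.compile("agggtaaa|tttaccct", re.IGNORECASE)

# Time repeated scans of the sequence; compare across interpreter builds.
elapsed = timeit.timeit(lambda: pattern.findall(seq), number=200)
print(f"200 iterations: {elapsed:.3f}s")
```

Running the same script under a 3.8 and a 3.9 build gives a crude first signal before reaching for the full benchmark.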
>
> Thanks.
>
> On 14.10.2020 15:16, Pablo Galindo Salgado wrote:
> > Hi!
> >
> > I have updated the branch benchmarks on the pyperformance server and
> > they now include 3.9. There are some benchmarks that are faster, but on
> > the other hand some benchmarks are substantially slower, pointing at a
> > possible performance regression in 3.9 in some aspects. In particular,
> > some tests like "unpack sequence" are almost 20% slower. As there are
> > some other tests where 3.9 is faster, it is not fair to conclude that
> > 3.9 is slower, but this is something we should look into, in my opinion.
> >
> > You can check the benchmarks I am talking about as follows:
> >
> > * Go here: https://speed.python.org/comparison/
> > * In the left bar, select "lto-pgo latest in branch '3.9'" and
> >   "lto-pgo latest in branch '3.8'".
> > * To read the plot more easily, I recommend selecting a "Normalization"
> >   to the 3.8 branch (in the top part of the page) and checking the
> >   "horizontal" checkbox.
> >
> > These benchmarks are very stable: I have executed them several times
> > over the weekend, yielding the same results, and, more importantly,
> > they are executed on a server specially prepared for running
> > reproducible benchmarks: CPU affinity, CPU isolation, CPU pinning for
> > NUMA nodes, fixed CPU frequency, CPU governor set to performance mode,
> > IRQ affinity disabled for the benchmarking CPU nodes, etc., so you can
> > trust these numbers.
> >
> > I kindly suggest that everyone interested in improving the performance
> > of 3.9 (and master) review these benchmarks, try to identify the
> > problems, and either fix them or find out which changes introduced the
> > regressions in the first place. All the benchmarks are the ones
> > executed by the pyperformance suite
> > (https://github.com/python/pyperformance), so you can execute them
> > locally if you need to.
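As a minimal illustration of what one of these micro-workloads measures, here is a stdlib-only approximation of the unpack_sequence core loop; the real bm_unpack_sequence.py performs many more unpackings per iteration, and the counts here are arbitrary. Running this under both a 3.8 and a 3.9 build and comparing the times is the quick-and-dirty version of what pyperformance automates:

```python
import timeit

def do_unpacking():
    # The core operation the unpack_sequence benchmark measures: tuple
    # and list unpacking into local variables, repeated many times.
    for _ in range(1000):
        a, b, c, d, e, f, g, h, i, j = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
        a, b, c, d, e, f, g, h, i, j = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# Take the best of several repeats to reduce scheduling noise.
best = min(timeit.repeat(do_unpacking, number=100, repeat=5))
print(f"best of 5: {best:.4f}s")
```

Note that a casual run on a desktop machine will be far noisier than the isolated server described above, so treat local numbers as indicative only.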
> >
> > ---
> >
> > On a related note, I am also working on the speed.python.org server to
> > provide more automation and ideally some integrations with GitHub to
> > detect performance regressions. For now, I have done the following:
> >
> > * Recompute benchmarks for all branches using the same version of
> >   pyperformance (except master) so they can be compared with each
> >   other. This can only be seen in the "Comparison" tab:
> >   https://speed.python.org/comparison/
> > * I am setting up daily builds of the master branch so we can detect
> >   performance regressions with daily granularity. These daily builds
> >   will be located in the "Changes" and "Timeline" tabs
> >   (https://speed.python.org/timeline/).
> > * Once the daily builds are working as expected, I plan to work on
> >   automatically commenting on PRs or on bpo if we detect that a commit
> >   has introduced some notable performance regression.
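A sketch of the kind of threshold check such automation could apply when comparing two result sets. The data layout, benchmark names, timings, and the 5% threshold below are all hypothetical; real pyperformance result files are much richer JSON documents:

```python
def find_regressions(baseline, candidate, threshold=0.05):
    """Return (name, ratio) pairs where candidate is slower than baseline
    by more than `threshold` (e.g. 0.05 = 5%), worst first.

    Inputs are plain {benchmark_name: mean_seconds} dicts.
    """
    regressions = []
    for name, base_mean in baseline.items():
        cand_mean = candidate.get(name)
        if cand_mean is None:
            continue  # benchmark missing from the candidate run
        ratio = cand_mean / base_mean
        if ratio > 1 + threshold:
            regressions.append((name, ratio))
    return sorted(regressions, key=lambda p: p[1], reverse=True)

# Illustrative numbers only, not actual 3.8/3.9 results.
py38 = {"unpack_sequence": 0.050, "regex_dna": 0.120, "nbody": 0.100}
py39 = {"unpack_sequence": 0.060, "regex_dna": 0.150, "nbody": 0.099}
for name, ratio in find_regressions(py38, py39):
    print(f"{name}: {ratio:.0%} of baseline")
```

In practice such a check would also need to account for run-to-run variance (e.g. comparing confidence intervals rather than bare means) before commenting on a PR.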
> >
> > Regards from sunny London,
> > Pablo Galindo Salgado.
> >
> > _______________________________________________
> > python-committers mailing list -- python-committers@python.org
> > To unsubscribe send an email to python-committers-leave@python.org
> > https://mail.python.org/mailman3/lists/python-committers.python.org/
> > Message archived at
> > https://mail.python.org/archives/list/python-committers@python.org/message/G3LB4BCAY7T7WG22YQJNQ64XA4BXBCT4/
> > Code of Conduct: https://www.python.org/psf/codeofconduct/
> >
>
> --
> Marc-Andre Lemburg
> eGenix.com
>
> Professional Python Services directly from the Experts (#1, Oct 14 2020)
> >>> Python Projects, Coaching and Support ... https://www.egenix.com/
> >>> Python Product Development ... https://consulting.egenix.com/
> ________________________________________________________________________
>
> ::: We implement business ideas - efficiently in both time and costs :::
>
> eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48
> D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
> Registered at Amtsgericht Duesseldorf: HRB 46611
> https://www.egenix.com/company/contact/
> https://www.malemburg.com/
>
--
Marc-Andre Lemburg
eGenix.com
Professional Python Services directly from the Experts (#1, Oct 14 2020)
>>> Python Projects, Coaching and Support ... https://www.egenix.com/
>>> Python Product Development ... https://consulting.egenix.com/
________________________________________________________________________
::: We implement business ideas - efficiently in both time and costs :::
eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
Registered at Amtsgericht Duesseldorf: HRB 46611
https://www.egenix.com/company/contact/
https://www.malemburg.com/
python-committers mailing list -- python-committers@python.org
To unsubscribe send an email to python-committers-leave@python.org
https://mail.python.org/mailman3/lists/python-committers.python.org/
Message archived at https://mail.python.org/archives/list/python-committers@python.org/message/LBEAVPI5WT6ZV5RKCKHW3EWXLDY534IQ/
Code of Conduct: https://www.python.org/psf/codeofconduct/