On Tue, Feb 26, 2019 at 22:45, Raymond Hettinger email@example.com wrote:
> Victor said he generally doesn't care about 5% regressions. That makes sense for odd corners of Python. The reason I was concerned about this one is that it hits the eval-loop and seems to affect every single opcode. The regression applies somewhat broadly, increasing the cost of reading and writing local variables by about 20%.
I ignore changes smaller than 5% because they are usually what I call the "noise" of the benchmark. It means that testing 3 commits gives 3 different timings, even if the commits don't touch anything used in the benchmark. There are multiple explanations: PGO compilation is not deterministic, some benchmarks are very sensitive to the CPU L1 instruction cache and so are heavily impacted by "code locality" (the exact addresses in memory), and many other things.
Hmm, sometimes running the same benchmark on the same code on the same hardware with the same strict procedure gives different timings at each attempt.
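For what it's worth, this kind of noise is easy to reproduce yourself. Here is a minimal sketch (the statement being timed is just an arbitrary example, not one of the pyperformance benchmarks): timing the exact same statement three times in a row with the stdlib timeit module usually gives three slightly different numbers, even though nothing changed.

```python
import timeit

# Time the exact same statement three times with identical parameters.
# The statement below is an arbitrary placeholder workload.
stmt = "sum(range(100))"
timings = [timeit.timeit(stmt, number=10_000) for _ in range(3)]

# Relative spread between the fastest and slowest run: this is the
# run-to-run "noise" even on one machine with one build of Python.
spread = (max(timings) - min(timings)) / min(timings)

print([f"{t:.4f}s" for t in timings])
print(f"spread: {spread:.1%}")
```

On a busy or untuned machine the spread can easily reach a few percent, which is exactly why a 3-5% delta between two commits tells you very little on its own.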
At some point, I decided to give up on these 5% to not lose my mind :-)