Re: [Speed] Any changes we want to make to perf.py?
I don't think that using a fixed number of iterations is a good way to get stable benchmark results. I opened the following issue to discuss that: https://bugs.python.org/issue26275
I proposed to calibrate the number of runs and the number of loops using time. I'm not yet convinced myself that it's a good idea.
For "runs" and "loops", I'm talking about something like that:
    times = []
    for run in range(runs):
        dt = time.perf_counter()
        for loop in range(loops):
            func()  # or python instructions
        times.append(time.perf_counter() - dt)
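To make the calibration idea concrete, here is a minimal sketch of how the number of loops could be chosen from time rather than fixed in advance. The helper name `calibrate_loops` and the `min_time` threshold are my own assumptions for illustration, not anything from perf.py:

```python
import time

def calibrate_loops(func, min_time=0.1):
    # Hypothetical helper: double the loop count until one pass of
    # the inner loop takes at least min_time seconds, so every run
    # is long enough for perf_counter() to measure it reliably.
    loops = 1
    while True:
        t0 = time.perf_counter()
        for _ in range(loops):
            func()
        if time.perf_counter() - t0 >= min_time:
            return loops
        loops *= 2

# Example: calibrate a trivial workload
loops = calibrate_loops(lambda: sum(range(100)))
```

The returned loop count could then be fed into the timing loop above instead of a hard-coded value.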
Victor
2016-02-11 19:31 GMT+01:00 Brett Cannon <brett@python.org>:
Some people have brought up the idea of tweaking how perf.py drives the benchmarks. I personally wonder if we should go from an elapsed-time measurement to a number-of-executions-in-a-set-amount-of-time measurement, to get a more stable number that's easier to measure and will still make sense as Python and computers get faster (I got this idea from Mozilla's Dromaeo benchmark suite: https://wiki.mozilla.org/Dromaeo).
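The Dromaeo-style approach described above could be sketched roughly as follows; the function name `runs_in_window` and the window length are hypothetical choices of mine, not part of perf.py or Dromaeo:

```python
import time

def runs_in_window(func, window=0.5):
    # Count how many times func() completes within a fixed time
    # window; the benchmark score is executions per window rather
    # than elapsed time per execution.
    count = 0
    deadline = time.perf_counter() + window
    while time.perf_counter() < deadline:
        func()
        count += 1
    return count

# Example: score a trivial workload over a 0.2 s window
score = runs_in_window(lambda: sum(range(100)), window=0.2)
```

A higher score then means a faster interpreter, and the number stays in a human-friendly range even as machines speed up.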
Speed mailing list Speed@python.org https://mail.python.org/mailman/listinfo/speed