[Speed] Any changes we want to make to perf.py?
Victor Stinner
victor.stinner at gmail.com
Thu Feb 11 17:27:35 EST 2016
I don't think that using a fixed number of iterations is a good way to get
stable benchmark results. I opened the following issue to discuss
that:
https://bugs.python.org/issue26275
I proposed to calibrate the number of runs and the number of loops
using time. I'm not yet convinced myself that it's a good idea.
By "runs" and "loops", I mean something like this:
import time

times = []
for run in range(runs):
    t0 = time.perf_counter()
    for loop in range(loops):
        func()  # or Python instructions
    times.append(time.perf_counter() - t0)
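A rough sketch of what time-based calibration of the loop count could look like (the function name `calibrate_loops` and the `min_time` threshold are illustrative choices, not something decided in the issue): keep doubling the number of loops until one timed run lasts at least a minimum wall-clock duration.

```python
import time

def calibrate_loops(func, min_time=0.1):
    # Double the loop count until a single timed run takes at least
    # min_time seconds. This is one plausible calibration scheme,
    # not necessarily what the issue will settle on.
    loops = 1
    while True:
        t0 = time.perf_counter()
        for _ in range(loops):
            func()
        if time.perf_counter() - t0 >= min_time:
            return loops
        loops *= 2
```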
Victor
2016-02-11 19:31 GMT+01:00 Brett Cannon <brett at python.org>:
> Some people have brought up the idea of tweaking how perf.py drives the
> benchmarks. I personally wonder if we should go from an elapsed time
> measurement to # of executions in a set amount of time measurement to get a
> more stable number that's easier to measure and will make sense even as
> Python and computers get faster (I got this idea from Mozilla's Dromaeo
> benchmark suite: https://wiki.mozilla.org/Dromaeo).
>
> _______________________________________________
> Speed mailing list
> Speed at python.org
> https://mail.python.org/mailman/listinfo/speed
>
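For reference, the executions-in-a-set-amount-of-time measurement Brett describes could be sketched like this (the function name `runs_per_window` and the 0.5 s window are arbitrary choices of mine, not Dromaeo's actual parameters):

```python
import time

def runs_per_window(func, window=0.5):
    # Count how many times func() completes within a fixed wall-clock
    # window. The score is a rate, so it stays meaningful even as
    # interpreters and hardware get faster.
    count = 0
    deadline = time.perf_counter() + window
    while time.perf_counter() < deadline:
        func()
        count += 1
    return count
```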