11 Oct 2016, 1:52 p.m.
On 11 October 2016 at 03:15, Elliot Gorokhovsky wrote:
> There's an option to provide setup code, of course, but I need to set
> up before each trial, not just before the loop.
Typically, I would just run the benchmark separately for each case:

    # Case 1
    python -m perf timeit -s 'setup; code; here' 'code; to; be; timed; here'
    [Results 1]

    # Case 2
    python -m perf timeit -s 'setup; code; here' 'code; to; be; timed; here'
    [Results 2]

The other advantage of doing it this way is that you can post your
benchmark command lines, which lets people see exactly what you're
timing, and if there *are* any problems (such as a method lookup that
skews the results) people can point them out.

Paul
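As an aside, if the setup genuinely has to run before every trial (not just once before the timing loop), the stdlib `timeit` module can approximate that: with `number=1`, the setup statement is re-executed for each call in `repeat`, so each trial starts from fresh state. This is only a sketch under that assumption; the statement and setup strings below are hypothetical stand-ins, not code from this thread:

```python
import timeit

# With number=1, setup runs before *every* trial, because each trial
# is a separate Timer.timeit() call and setup executes once per call.
results = timeit.repeat(
    stmt="sorted(data)",  # hypothetical code to be timed
    setup="import random; data = [random.random() for _ in range(1000)]",
    number=1,   # one execution per trial -> per-trial setup
    repeat=5,   # five independent trials
)
print(min(results))
```

Note the trade-off: with `number=1` each measurement covers a single execution, so for very fast statements the timings get noisy and you may need a large `repeat` count to get stable numbers.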