[Python-checkins] r46146 - sandbox/trunk/rjsh-pybench/pybench.py

Steve Holden steve at holdenweb.com
Wed May 24 11:34:37 CEST 2006


M.-A. Lemburg wrote:
> M.-A. Lemburg wrote:
> 
>>Steve Holden wrote:
>>
>>>M.-A. Lemburg wrote:
>>>
>>>>steve.holden wrote:
>>>>
>>>>
>>>>>Author: steve.holden
>>>>>Date: Tue May 23 21:21:00 2006
>>>>>New Revision: 46146
>>>>>
>>>>>Modified:
>>>>>  sandbox/trunk/rjsh-pybench/pybench.py
>>>>>Log:
>>>>>Use the appropriate clock for the platform.
>>>>>Default the number of calibration runs to 0 (can set with -C)
>>>>
>>>>Defaulting to 0 is a bad idea, since you basically turn
>>>>off calibration altogether.
>>>>
>>>>Note that calibration is *very* important so
>>>>that only the operation itself is timed, not the setup,
>>>>loop and other logic not related to the test run.
>>>>
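To make the mechanism concrete, the idea is roughly the following -- a
rough sketch, not pybench's actual code, and the names are only
illustrative:

    import time

    def calibration_overhead(empty_round, rounds=20, timer=time.time):
        # Time an "empty" version of the test harness -- same loop and
        # setup logic, but no real operation -- several times and
        # average the result, so it can be subtracted from each real
        # test's measured time.
        total = 0.0
        for _ in range(rounds):
            t0 = timer()
            empty_round()
            total += timer() - t0
        return total / rounds

A real test's per-operation figure is then roughly
(measured_time - overhead) / number_of_operations.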
>>>
>>>I understand the purpose of the calibration and agree that omitting
>>>calibration could falsely skew the operation times. I don't feel the
>>>operation times are a particularly valuable performance parameter anyway
>>>- nobody is going to say "ah, well I'll use dict lookup because it's
>>>faster than instance creation".
>>
>>That's not the point: you want to measure and compare the operation,
>>not the setup time and loop mechanism. Both can have quite different
>>performance timings on different machines, since they usually
>>involve a lot more complex machinery than the simple operation
>>you are trying to time.
>>
>>If you add a parameter to change the number of rounds the calibration
>>is run for, that's fine. Switching calibration off completely will make
>>comparisons between platforms produce wrong results.
>>
>>
>>>It may not be appropriate to check a version with the default number
>>>of calibration runs set to zero back into the trunk. However, repeated
>>>testing has shown that there is enough variability in calibration
>>>times to throw some results into confusion on some platforms. Comparing
>>>a run with calibration to a run without, for example, does not (on
>>>Windows, at least) yield the uniform speedup I would have expected.
>>
>>In that case, you should try to raise the number of rounds the
>>calibration code is run for. The default of 20 rounds may not be
>>enough to produce a stable average on Windows.
>>
>>It appears that time.clock() measures wall time instead of
>>just the process time on Windows, so it's likely that the
>>calibration run result depends a lot on what else is going on
>>on the system at the time you run pybench.
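For comparison, timeit's convention in the stdlib is to pick the
highest-resolution clock per platform -- this is just that convention,
not necessarily what the checkin does:

    import sys
    import time

    if sys.platform == 'win32':
        # On Windows, time.clock() is the highest-resolution timer in
        # the stdlib, but it measures wall time, not process time.
        default_timer = time.clock
    else:
        # On Unix, time.clock() returns process (CPU) time;
        # time.time() is the usual wall-clock choice.
        default_timer = time.time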
>>
>>
>>>I therefore think we should at least retain the option to ignore the
>>>operation times and omit calibration as it will give a more reproducible
>>>measure of absolute execution speed.
>>
>>On the change from time.clock() to time.time(): the resolution of
>>time.time() doesn't appear to be any better than that of time.clock()
>>on Windows. Is this documented somewhere?
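One quick way to answer that without the docs is to probe each timer's
effective resolution empirically -- a rough sketch:

    import time

    def timer_resolution(timer, samples=10):
        # Spin until the reported value changes and record the smallest
        # step observed; repeat a few times and take the minimum.
        steps = []
        for _ in range(samples):
            t0 = timer()
            t1 = timer()
            while t1 == t0:
                t1 = timer()
            steps.append(t1 - t0)
        return min(steps)

Comparing timer_resolution(time.time) with timer_resolution(time.clock)
on Windows should settle it.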
> 
> 
> It may actually be better to use the win32process API
> GetProcessTimes() directly:
> 
> http://aspn.activestate.com/ASPN/docs/ActivePython/2.4/pywin32/win32process__GetProcessTimes_meth.html
> http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dllproc/base/getprocesstimes.asp
> 
> provided the win32 package is installed.
> 
> This would be more in line with what time.clock() returns
> on Unix platforms, namely the process time... I wonder why
> time.clock() doesn't use this API on Windows.
> 
That's a pending change here, thanks to Kristjan V Jonsson of CCP, who 
brought the same code to my attention yesterday. It shouldn't be a 
difficult fix, so I'll see how it affects reliability. On Windows I 
can't imagine it will hurt ...
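
The core of it is a call along these lines -- just a sketch based on
the pywin32 documentation, not the pending patch itself:

    import win32process

    def process_time():
        # CPU time (user + kernel) used by the current process, in
        # seconds.  KernelTime and UserTime are reported by pywin32 as
        # integers in 100-nanosecond units.
        handle = win32process.GetCurrentProcess()
        times = win32process.GetProcessTimes(handle)
        return (times['KernelTime'] + times['UserTime']) * 1e-7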

regards
  Steve
-- 
Steve Holden       +44 150 684 7255  +1 800 494 3119
Holden Web LLC/Ltd          http://www.holdenweb.com
Love me, love my blog  http://holdenweb.blogspot.com
Recent Ramblings     http://del.icio.us/steve.holden

