[Python-ideas] Timing hefty or long-running blocks of code

Masklinn masklinn at masklinn.net
Sun Jun 2 17:20:09 CEST 2013


On 2013-06-02, at 17:00 , Ron Adam wrote:
> On 06/02/2013 12:29 AM, Steven D'Aprano wrote:
>> 
>> To put it another way, you can assume that each timeit result is made up of
>> two components:
>> 
>> - the "actual" or "real" time that the code would take in a perfect world
>> where no other processes are running and your code is run at the highest
>> speed possible;
>> 
>> - plus some unknown error, due to external factors, overhead of the timing
>> code, etc.
>> 
>> Note that the error is always strictly positive, never negative or zero. So
>> the min() of the timing results is the best estimate of the "actual" time.
>> Any other estimate, including mean or median, will be higher than the
>> minimum, and therefore less accurate.
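(For concreteness, a minimal sketch of that approach — timing an arbitrary toy
expression with timeit.repeat and keeping only the minimum:)

import timeit

# Repeat the measurement several times and keep the smallest result,
# since external noise only ever adds to the "real" time.
results = timeit.repeat("sum(range(1000))", repeat=5, number=10000)
print(min(results))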
> 
> Yes to all that, and in addition, it only gives results for a very limited set of conditions.
> 
> When timing larger blocks of code (hefty or long-running), there may be different sub-paths of execution that result in a wide range of times in actual use.
> 
> In those cases, while the minimum is nice to know, I want to identify the sections of code and conditions that take the longest.  Those are the parts I will try to make more efficient.
> 
> 
> I like your practical description of just wanting to know how long something will take in a hand-wavy way.
> 
> My thought this morning is that it would be nice to have both your stopwatch context timer and timers for use in running programs.
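(Something along these lines, I assume — a minimal sketch of what such a
stopwatch context manager might look like, assuming plain wall-clock
time.time() is good enough; the names are made up:)

import time
from contextlib import contextmanager

@contextmanager
def stopwatch(label="block"):
    # Measure wall-clock time around the body of the with-block.
    start = time.time()
    try:
        yield
    finally:
        print("%s took %.3f seconds" % (label, time.time() - start))

with stopwatch("building the list"):
    data = [i * i for i in range(100000)]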
> 
> 
> For example, if a program I'm working on seems a bit slower than I think it should be, I want to look into how it can be made more efficient.

That's a profile. Note that you can add both timing and profiling with
low syntactic overhead (via function decorators) through the
profilehooks package (available on pypi): profilehooks.profile will do
profiling runs on the function it decorates when that function is run,
run, while profilehooks.timecall will just print the function's runtime. Both can be
either immediate (print whatever's been collected for a call at the end
of the call) or not immediate (print a summary of all calls when the
program terminates):

>>> import profilehooks
>>> @profilehooks.timecall
... def immediate(n):
...     list(xrange(n))
... 
>>> @profilehooks.timecall(immediate=False)
... def deferred(n):
...     list(xrange(n))
... 
>>> immediate(40000)

  immediate (<stdin>:1):
    0.002 seconds

>>> immediate(400000)

  immediate (<stdin>:1):
    0.023 seconds

>>> immediate(4000000)

  immediate (<stdin>:1):
    0.198 seconds

>>> deferred(40000)
>>> deferred(400000)
>>> deferred(4000000)
>>> ^D

  deferred (<stdin>:1):
    3 calls, 0.155 seconds (0.052 seconds per call)
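
As mentioned above, profilehooks.profile does a full profiling run instead of
just timing; it is used the same way. A rough sketch (from memory, so take the
exact keyword arguments with a grain of salt):

import profilehooks

@profilehooks.profile(immediate=True)
def build(n):
    # Each call prints a pstats-style breakdown of where the time went;
    # without immediate=True the report is deferred until the program exits.
    return [i * i for i in xrange(n)]

build(4000000)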


