Unexpected timing results
Kent Johnson
kent at kentsjohnson.com
Thu Feb 23 11:26:09 EST 2006
Steven D'Aprano wrote:
> I have two code snippets to time a function object being executed. I
> expected that they should give roughly the same result, but one is more
> than an order of magnitude slower than the other.
>
> Here are the snippets:
>
> def timer1():
>     timer = time.time
>     func = lambda: None
>     itr = [None] * 1000000
>     t0 = timer()
>     for _ in itr:
>         func()
>     t1 = timer()
>     return t1 - t0
>
> def timer2():
>     timer = time.time
>     func = lambda: None
>     itr = [None] * 1000000
>     t = 0.0
>     for _ in itr:
>         t0 = timer()
>         func()
>         t1 = timer()
>         t += t1 - t0
>     return t
>
> Here are the results:
>
>
> >>> timer1()
> 0.54168200492858887
> >>> timer2()
> 6.1631934642791748
>
> Of course I expect timer2 should take longer to execute in total,
> because it is doing a lot more work. But it seems to me that all that
> extra work should not affect the time measured, which (I imagine) should
> be about the same as timer1. Possibly even less as it isn't timing the
> setup of the for loop.
>
> Any ideas what is causing the difference? I'm running Python 2.3 under
> Linux.
You are including the cost of one call to timer() in each loop iteration:
the cost of returning the time from the first call to timer(), plus the
cost of getting the time in the second call. On my computer with Python 2.4:
D:\Projects\CB>python -m timeit -s "import time;timer=time.time" "t=timer()"
1000000 loops, best of 3: 0.498 usec per loop
which is almost exactly the same as the difference between timer1() and
timer2() on my machine:
timer1 0.30999994278
timer2 0.812000274658
using Python 2.4.2 on Win2k.
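The point can be checked directly without the timeit command line: the
per-iteration gap between timer2 and timer1 should be close to the cost of
a single timer() call. A minimal sketch (timer_call_cost is my own helper
name; it uses time.time as in the original code, though on modern Pythons
time.perf_counter is the better clock for this):

```python
import time

def timer_call_cost(n=1000000):
    """Estimate the average cost in seconds of one timer() call
    by timing n back-to-back calls and dividing by n."""
    timer = time.time
    t0 = timer()
    for _ in range(n):
        timer()          # the call whose overhead we are measuring
    t1 = timer()
    return (t1 - t0) / n

print("per-call cost: %.3g sec" % timer_call_cost())
```

Multiplying that per-call figure by the million iterations should land in
the same ballpark as the timer2 - timer1 gap reported above.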
Kent