time.clock() or Windows bug?
mgenti
mgenti at gentiweb.com
Thu Jun 12 11:12:16 EDT 2008
Don't forget that the timeit module uses time.clock() on Windows as well:

    if sys.platform == "win32":
        # On Windows, the best timer is time.clock()
        default_timer = time.clock
    else:
        # On most other platforms the best timer is time.time()
        default_timer = time.time

http://svn.python.org/view/python/trunk/Lib/timeit.py
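In practice that means you can just import timeit's default_timer instead of picking a clock yourself. A minimal sketch (the work being timed here is an arbitrary sum, chosen only for illustration):

```python
# Portable wall-clock timing via timeit's default_timer, which selects
# the best available clock for the current platform.
from timeit import default_timer

start = default_timer()
total = sum(range(100000))  # some arbitrary work to time
elapsed = default_timer() - start

print("summed to %d in %.6f seconds" % (total, elapsed))
```
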
--Mark
On Jun 9, 5:30 am, Nick Craig-Wood <n... at craig-wood.com> wrote:
> Theo v. Werkhoven <t... at van-werkhoven.nl.invalid> wrote:
>
>
>
> > The carbonbased lifeform Nick Craig-Wood inspired comp.lang.python with:
> > > Theo v. Werkhoven <t... at van-werkhoven.nl.invalid> wrote:
> > >> Output:
> > >> Sample 1, at 0.0 seconds from start; Output power is: 8.967 dBm
> > > [snip]
> > >> Sample 17, at 105.7 seconds from start; Output power is: 9.147 dBm
> > >> Sample 18, at 112.4 seconds from start; Output power is: 9.284 dBm
> > >> Sample 19, at 119.0 seconds from start; Output power is: 9.013 dBm
> > >> Sample 20, at 125.6 seconds from start; Output power is: 8.952 dBm
> > >> Sample 21, at 91852.8 seconds from start; Output power is: 9.102 dBm
> > >> Sample 22, at 91862.7 seconds from start; Output power is: 9.289 dBm
> > >> Sample 23, at 145.4 seconds from start; Output power is: 9.245 dBm
> > >> Sample 24, at 152.0 seconds from start; Output power is: 8.936 dBm
> > > [snip]
> > >> But look at the timestamps of samples 21, 22 and 43.
> > >> What is causing this?
> > >> I've replaced the time.clock() with time.time(), and that seems to
> > >> solve the problem, but I would like to know if it's something I
> > >> misunderstand or if it's a problem with the platform (Windows Server
> > >> 2003) or the time.clock() function.
>
> > > time.clock() uses QueryPerformanceCounter under windows. There are
> > > some known problems with that (eg with Dual core AMD processors).
>
> > > See http://msdn.microsoft.com/en-us/library/ms644904.aspx
>
> > > And in particular
>
> > > On a multiprocessor computer, it should not matter which processor
> > > is called. However, you can get different results on different
> > > processors due to bugs in the basic input/output system (BIOS) or
> > > the hardware abstraction layer (HAL). To specify processor
> > > affinity for a thread, use the SetThreadAffinityMask function.
>
> > Alright, that explains that then.
>
> > > I would have said time.time is what you want to use anyway though
> > > because under unix time.clock() returns the elapsed CPU time which is
> > > not what you want at all!
>
> > You're right, using functions that do not work cross-platform isn't
> > smart.
>
> Actually there is one good reason for using time.clock() under Windows
> - because it is much higher precision than time.time(). Under Windows
> time.time() is only accurate at best 1ms, and in fact it is a lot
> worse than that.
>
> Under Win95/98 it has a 55ms granularity and under Vista time.time()
> changes in 15ms or 16ms steps.
>
> Under unix, time.clock() is pretty much useless because it measures
> CPU time (with a variable precision, maybe 10ms, maybe 1ms), and
> time.time() has a precision of about 1us (exact precision depending on
> lots of things!).
>
> """
> Test timing granularity
>
> Under Vista this produces
>
> C:\>time_test.py
> 15000us - 40 times
> 16000us - 60 times
>
> Under linux 2.6 this produces
>
> $ python time_test.py
> 1us - 100 times
>
> """
>
> from time import time
>
> granularities = {}
>
> for i in range(100):
>     x = time()
>     j = 0
>     while 1:
>         y = time()
>         if x != y:
>             dt = int(1000000*(y - x)+0.5)
>             granularities[dt] = granularities.get(dt, 0) + 1
>             break
>         j += 1
>
> dts = granularities.keys()
> dts.sort()
> for dt in dts:
>     print "%7dus - %3d times" % (dt, granularities[dt])
>
> --
> Nick Craig-Wood <n... at craig-wood.com> -- http://www.craig-wood.com/nick