constant in python?
Bengt Richter
bokr at accessone.com
Sat Aug 18 15:22:32 EDT 2001
On Sat, 18 Aug 2001 18:59:30 +0200, "Alex Martelli" <aleaxit at yahoo.com> wrote:
>"Michael Ströder" <michael at stroeder.com> wrote in message
>news:3B7E5355.675D569B at stroeder.com...
>> Alex Martelli wrote:
>> >
>> > > > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/65207
>> >
>> > If something can be done with a few lines of code in the
>> > language as it is, it requires very strong motivation for that
>> > something to become a built-in language mechanism:
>>
>> I would guess that the recipe above causes some performance penalty.
>
>Why guess, when you can measure?
>
>Say const.py is like in the recipe, normal.py is an empty
>normal module, and test.py contains:
>
>import time, const, normal
>
>start = time.clock()
>const.magic = 23
>sum = 0L
>for i in range(100000):
>    sum += const.magic
>stend = time.clock()
>
>print "const : %5.2f"%(stend-start)
>
>start = time.clock()
>normal.magic = 23
>sum = 0L
>for i in range(100000):
>    sum += normal.magic
>stend = time.clock()
>
>print "normal: %5.2f"%(stend-start)
>
>Here are three runs on my box:
>
>D:\ian\good>python test.py
>const : 1.44
>normal: 1.60
>
>D:\ian\good>python test.py
>const : 1.63
>normal: 1.53
>
>D:\ian\good>python test.py
>const : 1.41
>normal: 1.53
>
>See? No statistically significant difference penalizes the
>const module vs a normal module -- it so happens the
>loop using the const module was faster two times out
>of three, but this is indeed happenstance.
>
>Moral: never guess at performance -- it often baffles the
>intuition even of the most accomplished expert. MEASURE.
>
I agree, but the happenstance can be separated out to a fair degree,
even if you're online and background stuff may be happening.
(Not very elegant cutting and pasting, but here's what I like to do :)
_______________________________________________
import time, const, normal
const.magic = 23
normal.magic = 23
sumc = 0L
sumn = 0L
cdir={}
ndir={}
for i in range(100000):
    start = time.clock()
    sumc += const.magic
    stend = time.clock()
    sumn += normal.magic
    stend2 = time.clock()
    dt = int((stend-start)*1.0e9)  # nanoseconds
    if cdir.has_key(dt):
        cdir[dt] += 1
    else:
        cdir[dt] = 1
    dt = int((stend2-stend)*1.0e9)  # nanoseconds
    if ndir.has_key(dt):
        ndir[dt] += 1
    else:
        ndir[dt] = 1
print '-- const case --'
vk = [(v,k) for k,v in cdir.items()]
vk.sort()
vk.reverse()
for v,k in vk[:10]:
    print "%7d %10d" % (v, k)
print '-- normal case --'
vk = [(v,k) for k,v in ndir.items()]
vk.sort()
vk.reverse()
for v,k in vk[:10]:
    print "%7d %10d" % (v, k)
__________________
Result (NT4 SP3, 300 MHz P2, 320 MB RAM, Python 2.1):
-- const case --
54910 31847
21574 31009
20695 32685
1883 33523
327 30171
119 34361
72 35199
54 38552
49 39390
28 36038
-- normal case --
55261 30171
26861 31009
14372 29333
2785 31847
305 32685
51 37714
41 28495
31 38552
27 36876
19 39390
But note that you don't have to run 100k trials to get useful
results, since you can see the outliers that would otherwise get
averaged in. Here's the same thing for 1k trials:
-- const case --
549 31009
293 30171
135 31847
11 32685
6 29333
1 1063542
1 364571
1 46095
1 35199
1 34361
-- normal case --
646 29333
205 28495
128 30171
13 31009
1 402285
1 99733
1 93028
1 68723
1 63695
1 54476
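(Translated to current Python, the same per-iteration histogram idea takes only a few lines with time.perf_counter_ns and collections.Counter -- a sketch in the same spirit, not a drop-in for the 2.1 code above; the Box class here just stands in for a module attribute:)

```python
# Modern sketch of the per-iteration timing histogram: time each attribute
# access in nanoseconds and bucket the deltas, so rare outliers show up as
# their own buckets instead of quietly inflating a mean.
import time
from collections import Counter

class Box:        # stand-in for the 'normal' module holding the attribute
    pass

box = Box()
box.magic = 23

hist = Counter()
total = 0
for _ in range(1000):
    t0 = time.perf_counter_ns()
    total += box.magic
    t1 = time.perf_counter_ns()
    hist[t1 - t0] += 1

# Ten most common per-access times, count first, like the listings above.
for dt, count in hist.most_common(10):
    print("%7d %10d" % (count, dt))
```

Counter.most_common does the sort-by-frequency that the vk sort/reverse dance did by hand above.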