[Python-Dev] Benchmarking the int allocator (Was: Type of range object members)
"Martin v. Löwis"
martin at v.loewis.de
Wed Aug 16 21:52:13 CEST 2006
Guido van Rossum wrote:
> I think the test isn't focused enough on int allocation. I
> wonder if you could come up with a benchmark that repeatedly allocates
> 100s of 1000s of ints and then deletes them?
The question is: where to store them? In a pre-allocated list, or in a
growing list?
import time

def t3():
    m = [0]*100000
    s = time.time()
    for i in xrange(100):
        for k in xrange(100000):
            m[k] = k
    print "Test 3", time.time()-s
def t4():
    s = time.time()
    for i in xrange(100):
        m = []
        for k in xrange(100000):
            m.append(k)
    print "Test 4", time.time()-s
This is 100s of 1000s of ints in the inner loop; test 3 puts them
into a pre-allocated list, test 4 discards the list each time and
lets it grow.
> What if it also allocates
> other small objects so the ints become more fragmented?
This allocator doesn't bother much about fragmentation: it's
constant-time most of the time on allocation, and often on
deallocation (especially when the memory is fragmented).
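The constant-time behaviour can be illustrated with a toy free list: freed cells are pushed onto a stack, and the next allocation pops one off, regardless of how fragmented the underlying arena is. This is only a sketch of the idea (the class and its names are mine for illustration; the real int free list is C code in Objects/intobject.c):

```python
class FreeListAllocator(object):
    """Toy free-list allocator: alloc and dealloc are both O(1)."""

    def __init__(self):
        self.free = []          # stack of recycled cells
        self.allocated = 0      # cells carved out fresh, never recycled

    def alloc(self):
        if self.free:
            return self.free.pop()   # O(1): reuse a recycled cell
        self.allocated += 1
        return [None]                # O(1): carve out a fresh cell

    def dealloc(self, cell):
        self.free.append(cell)       # O(1): push onto the free list

a = FreeListAllocator()
cells = [a.alloc() for _ in range(1000)]
for c in cells:
    a.dealloc(c)
# A second wave of allocations reuses the recycled cells:
reused = [a.alloc() for _ in range(1000)]
print(a.allocated)   # still 1000: nothing new was carved out
```

The point of the sketch: once cells exist, allocation cost does not depend on where they sit in memory, which is why fragmentation barely shows up in the timings.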
Also, it's hard to find another object type that is as small as an
int; a one-element tuple comes close:
def t5():
    s = time.time()
    for i in xrange(100):
        m = []
        for k in xrange(100000):
            m.append((k,))
    print "Test 5", time.time()-s
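As a side check of the "as small as an int" point: sys.getsizeof (added in Python 2.6, so not available in the 2.5 build benchmarked here) reports per-object footprints. A small sketch; exact byte counts vary by platform, pointer width, and Python version, so only the ordering is checked:

```python
import sys

# Compare the footprint of an int and a one-element tuple.
# The tuple carries the tuple header plus an item pointer on top of
# whatever the int itself costs, so it should never be smaller.
int_size = sys.getsizeof(12345)
tup_size = sys.getsizeof((12345,))
print(int_size <= tup_size)
```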
The timings, for the best of three runs:

             Py2.5   +obmalloc-for-int   slowdown
    Test 3   1.8s    2.1s                15%
    Test 4   3.6s    3.8s                 5%
    Test 5   7.5s    7.5s                 0%
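For anyone rerunning these numbers on a modern interpreter, the same measurements can be taken with the stdlib timeit module (range replaces xrange in Python 3). A sketch with the loop counts as parameters; the reduced counts in the demo call are my choice to keep it quick, not the original settings:

```python
import timeit

def t3(outer=100, inner=100000):
    # Pre-allocated list, as in the original test 3.
    m = [0] * inner
    for i in range(outer):
        for k in range(inner):
            m[k] = k

def t4(outer=100, inner=100000):
    # Discard and regrow the list each pass, as in the original test 4.
    for i in range(outer):
        m = []
        for k in range(inner):
            m.append(k)

# min() over three repeats mirrors the "best of three" methodology.
for name, fn in [("Test 3", lambda: t3(10, 100000)),
                 ("Test 4", lambda: t4(10, 100000))]:
    best = min(timeit.repeat(fn, number=1, repeat=3))
    print(name, "%.2fs" % best)
```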
Regards,
Martin