[Tutor] timeit: 10 million x 1 vs 1 million x 10
DoanVietTrungAtGmail
doanviettrung at gmail.com
Thu Feb 28 03:27:46 CET 2013
Dear tutors,
My function below simply populates a large dict. When I use timeit to
measure populating 10 million items once versus populating 1 million items
ten times, the times are noticeably different:
---
import timeit

N = 10000000  # this constant's value is either 10 million or 1 million
testDict = {}

def writeDict(N):
    for i in xrange(N):
        testDict[i] = [i, [i + 1, i + 2], i + 3]

# the 'number' parameter is either 1 or 10
print timeit.Timer('f(N)', 'from __main__ import N, writeDict as f').timeit(1)
---
Results from 3 runs of 10 million x 1 time:
12.7655465891, 13.1248426525, 12.1611512459
Results from 3 runs of 1 million x 10 times:
14.3727692498, 14.3825673988, 14.4390314636
I ran Python 2.7 in PyCharm on Windows 7.
My guess is that this discrepancy comes either from some sort of overhead
in timeit, or from Python having to allocate memory for the dict 10 times.
What do you think, and how can I find out for sure?
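One way to check the timeit-overhead guess would be to time a do-nothing
function both ways: if ten calls of a no-op cost about the same as one
call, the gap is not coming from timeit's call machinery. A minimal sketch
(noop is a made-up stand-in, not part of the code above):
---
import timeit

def noop(N):
    pass  # no dict work at all, so only call overhead is measured

# compare the cost of 10 calls of a no-op against 1 call
print timeit.Timer('f(1000000)', 'from __main__ import noop as f').timeit(10)
print timeit.Timer('f(10000000)', 'from __main__ import noop as f').timeit(1)
---
Another effect worth separating out: because testDict is a global that
survives between calls, the second through tenth 1-million runs overwrite
existing keys instead of growing the dict. Calling testDict.clear() at the
top of writeDict would test whether that matters.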
Second (for me, this is the more important question): how can I improve
performance? (I tried a tuple rather than a list for the dict values; it
was slightly faster, but I need the dict items to be mutable.)
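For the performance question, one variant worth timing (a sketch under the
same workload, not a guaranteed win) is to build the whole dict with a
dict comprehension instead of assigning one key at a time to a global; the
values are still lists, so they stay mutable. writeDictComp is a
hypothetical name:
---
import timeit

N = 1000000  # same constant as before

def writeDictComp(N):
    # build the dict in one expression and return it;
    # the values are still mutable lists
    return {i: [i, [i + 1, i + 2], i + 3] for i in xrange(N)}

print timeit.Timer('f(N)', 'from __main__ import N, writeDictComp as f').timeit(10)
---
A related micro-optimisation is to bind the dict to a local name inside
writeDict before the loop, since local-variable lookups are faster than
global lookups in CPython.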
Thanks
Trung Doan