Is there any way to minimize str()/unicode() objects memory usage [Python 2.6.4] ?

dmtr dchichkov at
Sat Aug 7 10:32:45 CEST 2010

> Looking at your benchmark, random.choice(letters) has probably less overhead
> than letters[random.randint(...)]. You might even try to inline it as

Right... random.choice()...  I'm a bit new to Python, always something
to learn. But anyway, in that benchmark (from
) the code that generates the 'words' takes 90% of the time, and I'm
really looking at the deltas between different methods, not the
absolute values. I was also using different code to get the benchmarks
in my previous message... Here's the code:

# -*- coding: utf-8  -*-
import os, time, re, array

start = time.time()
d = dict()
for i in xrange(0, 1000000):
    d[unicode(i).encode('utf-8')] = array.array('i', (i, i+1, i+2, i+3, i+4, i+5, i+6))
dt = time.time() - start
vm = re.findall("(VmPeak.*|VmSize.*)", open('/proc/%d/status' % os.getpid()).read())
print "%d keys, %s, %f seconds, %f keys per second" % (len(d), vm, dt, len(d) / dt)
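For what it's worth, the random.choice() versus randint()-indexing
question from the top of the thread can be micro-benchmarked on its
own. Here's a small sketch (Python 3 syntax, so it won't run unmodified
on 2.6.4, and the function names are just illustrative; timings will
vary by machine):

```python
# Compare two ways of picking a random letter:
# random.choice(letters) vs. letters[random.randint(...)].
import random
import string
import timeit

letters = string.ascii_lowercase

def pick_with_choice():
    # Delegates index selection to random.choice().
    return random.choice(letters)

def pick_with_randint():
    # Computes a random index explicitly, then subscripts.
    return letters[random.randint(0, len(letters) - 1)]

n = 100000
t_choice = timeit.timeit(pick_with_choice, number=n)
t_randint = timeit.timeit(pick_with_randint, number=n)
print("random.choice: %.3fs, randint indexing: %.3fs" % (t_choice, t_randint))
```

Either way, the per-call difference is small compared to the dict and
array work in the benchmark above, which is why only the deltas between
runs matter here.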
