Cryptographically random numbers
steve at REMOVETHIScyber.com.au
Tue Mar 7 23:33:37 CET 2006
On Tue, 07 Mar 2006 22:32:04 +0100, Fredrik Lundh wrote:
>> Python lists have a special efficiency hack so that ret.append doesn't
>> copy the whole list around, but rather, allocates space in bigger
>> chunks so that appending usually takes constant time.
> in 2.4 and later, += on strings does the operation in place when
> possible. this means that += is often faster than append/join.
As much as we all love optimizations, I'm unhappy about this one. It used
to be a really simple rule: to append lots of strings, use append/join.
That worked on any Python you used.
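For concreteness, the two idioms under discussion look something like this
(a minimal sketch; the function names are mine, not anything standard):

```python
def build_with_join(pieces):
    # The traditional, portable idiom: accumulate fragments in a list
    # (amortized constant-time appends), then join once at the end.
    parts = []
    for p in pieces:
        parts.append(p)
    return "".join(parts)

def build_with_concat(pieces):
    # The += idiom: on CPython 2.4+ this can resize the string in place,
    # but on other implementations each += may copy the whole string.
    result = ""
    for p in pieces:
        result += p
    return result

pieces = [str(n) for n in range(1000)]
assert build_with_join(pieces) == build_with_concat(pieces)
```

Both produce identical results; the whole argument here is about which one
you can rely on to be fast everywhere.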
But now code that runs really fast in CPython 2.4 will run like a dog in
Jython or CPython 2.3. Python's implementation independence is weakened.
Worse, because nobody really knows what "often" means (or at least, those
who know haven't said), a conscientious Python developer who wants fast
performance now has to measure all of his string-appending code using both
idioms, to be sure whether *this particular case* is one of the "often".
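Measuring "both idioms" might look like the sketch below, using the
standard `timeit` module. The sizes and repeat counts are arbitrary, and
the relative numbers will of course differ between CPython versions and
other implementations, which is exactly the point:

```python
import timeit

setup = "pieces = [str(n) for n in range(10000)]"

join_stmt = """\
parts = []
for p in pieces:
    parts.append(p)
s = "".join(parts)
"""

concat_stmt = """\
s = ""
for p in pieces:
    s += p
"""

# Time each idiom over the same input; smaller is faster.
t_join = timeit.timeit(join_stmt, setup=setup, number=100)
t_concat = timeit.timeit(concat_stmt, setup=setup, number=100)
print("append/join: %.3fs   +=: %.3fs" % (t_join, t_concat))
```

On an interpreter without the in-place optimization, the `+=` figure
blows up quadratically with the input size, while append/join stays
roughly linear.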
Is there some heuristic for telling when CPython can do the in-place
appending, and when it can't? If we were talking about a tiny (1% say)
performance penalty for getting it wrong, then it wouldn't matter, but the
penalty for using += when the optimization doesn't apply is severe.
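As far as I can tell (and this is an assumption about a CPython
implementation detail, not documented or guaranteed behaviour), the
in-place resize can only happen when the interpreter holds the *sole*
reference to the target string; keeping any other name bound to the
intermediate value forces a full copy on every append:

```python
# Case 1: s is the only reference to the string being grown,
# so CPython 2.4+ may resize its buffer in place.
s = ""
for i in range(1000):
    s += "x"

# Case 2: a second reference to the same string object exists,
# so the optimization cannot apply and each += copies everything
# accumulated so far -- quadratic behaviour. (Assumed heuristic.)
s = ""
for i in range(1000):
    alias = s      # second reference defeats the in-place resize
    s += "x"

assert s == "x" * 1000
```

Even if that heuristic is right, it's fragile: an innocent-looking
refactoring that keeps a reference alive silently changes the complexity
class of the loop.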