vze4rx4y at verizon.net
Tue Oct 19 10:12:56 CEST 2004
"bearophile" <bearophileHUGS at lycos.com> wrote in message
news:5c882bb5.0410180743.62059c68 at posting.google.com...
> Comparing versions for speed, Raymond Hettinger's version is quite
> slow (even 10 or more times slower than mine for certain kinds of
> lists). In the innermost loop of Alex Martelli's version there is a
> function call to isatomic (and the try-except) that slows down the
> program a lot. Removing them (we expand lists only) we obtain the short
> Peter Otten's version, which is a bit better than mine for very nested
> lists and a bit worse for quite flat lists (which are the most common,
> I think).
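The lists-only style referred to above can be sketched as follows. This is a hedged illustration of the general approach, not the actual code posted by Peter Otten or anyone else in the thread; the function name is my own:

```python
def flatten_lists(seq):
    # Expand nested lists only: no atomicity test and no try/except
    # in the inner loop, which is why this style benchmarks fast.
    result = []
    for item in seq:
        if isinstance(item, list):
            result.extend(flatten_lists(item))
        else:
            result.append(item)
    return result

flatten_lists([1, [2, [3, 4]], 5])  # returns [1, 2, 3, 4, 5]
```

The speed comes at a price: anything that is not a list (tuples, generators, strings) is passed through untouched, and the whole result is built in memory at once.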
That is an apples-to-oranges comparison. The loss of speed is due to the
additional features of not expanding strings and not expanding iterables into
memory all at once. Previous discussions about flatten() in this newsgroup and
on the tutor list have indicated that this is what is usually wanted. Take out
the test for the atomicity of strings and the speed is much improved. Also,
from the original posting, it seemed that memory friendliness was a key measure
of merit given the huge data sizes and nesting depths.
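Those two features can be sketched together in a generator-based flatten. This is a minimal illustration of the technique being described, assuming Python's usual iterator protocol; it is not the exact code any of the posters used:

```python
def iflatten(iterable):
    # Memory friendly: yields items lazily instead of building the
    # whole flattened result in memory at once.
    for item in iterable:
        if isinstance(item, str):
            # Strings are treated as atomic: do not explode them
            # into streams of characters.
            yield item
            continue
        try:
            subiter = iter(item)
        except TypeError:
            # Non-iterable atom (number, etc.)
            yield item
        else:
            for sub in iflatten(subiter):
                yield sub

list(iflatten([1, "ab", [2, (3, 4)], 5]))  # returns [1, "ab", 2, 3, 4, 5]
```

The `isinstance` test on strings and the try/except are exactly the per-item costs discussed above; dropping them buys speed but loses both string atomicity and support for arbitrary iterables.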
This discussion may have over-abstracted the problem so that your
apples-to-oranges timings are irrelevant. Do you have real-world use cases for
wanting to flatten nested containers of non-homogeneous object types (a mix of
numbers, strings, dictionaries, etc.)? Do you really want to split your strings
into streams of characters and then throw numbers into the mix? I think not.