Optimizing list processing
ben+python at benfinney.id.au
Thu Dec 12 02:18:30 CET 2013
Steven D'Aprano <steve+comp.lang.python at pearwood.info> writes:
> For giant iterables (ten million items), this version is a big
> improvement, about three times faster than the list comp version. […]
> Except that for more reasonably sized iterables, it's a pessimization.
> With one million items, the ratio is the other way around: the list
> comp version is 2-3 times faster than the in-place version. For
> smaller lists, the ratio varies, but the list comp version is
> typically around twice as fast. A good example of trading memory for […]
> Is there any way to determine which branch I should run, apart from
> hard-coding some arbitrary and constant cut-off value?
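For context, the two branches being compared are presumably along these lines (the original code is not shown in the thread, so names and shapes here are illustrative):

```python
def process_listcomp(data, func):
    # Build a fresh list with a comprehension: fast for small and
    # medium inputs, but briefly needs memory for a second list.
    return [func(x) for x in data]

def process_inplace(data, func):
    # Overwrite elements in place: no second list is allocated,
    # which is what pays off on very large inputs.
    for i, x in enumerate(data):
        data[i] = func(x)
    return data
```

Both produce the same result; only their memory behaviour differs, which is why the crossover point depends on input size.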
Hmm. The code isn't going to be able to accurately judge in advance the
time it'll take to run. But perhaps it could quickly and accurately
calculate the memory usage of your data structure? Is that useful for
determining which branch to take?
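A minimal sketch of that size-based dispatch, using `sys.getsizeof`. The threshold value is entirely hypothetical and would have to be calibrated by benchmarking on the target machine, and `getsizeof` on a list counts only the pointer array, not the elements themselves:

```python
import sys

SIZE_THRESHOLD = 80_000_000  # bytes; a made-up cut-off, tune by benchmarking

def approx_list_bytes(data):
    # getsizeof(list) covers only the list's own pointer array;
    # add the elements' sizes for a rough total footprint.
    return sys.getsizeof(data) + sum(sys.getsizeof(x) for x in data)

def process(data, func):
    if approx_list_bytes(data) > SIZE_THRESHOLD:
        # Huge input: mutate in place to avoid a second list.
        for i, x in enumerate(data):
            data[i] = func(x)
        return data
    # Otherwise the list comprehension is typically faster.
    return [func(x) for x in data]
```

Note this estimate is itself O(n), so for this to be worthwhile the per-element work in `func` has to dominate a `getsizeof` call.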
 \     “The fact that a believer is happier than a skeptic is no more
  `\   to the point than the fact that a drunken man is happier than a
_o__)  sober one.” —George Bernard Shaw
More information about the Python-list mailing list