[Python-ideas] Keep free list of popular iterator objects
Tim Peters
tim.peters at gmail.com
Mon Sep 16 01:56:25 CEST 2013
[Kyle Fisher]
> I've realized that my original example is far too complex, so I've
> simplified it:
>
> Status quo:
> ./python -m timeit -r 100 -s "a=[1]" "iter(a)"
> 10000000 loops, best of 100: 0.0662 usec per loop
>
> With patch:
> ./python -m timeit -r 100 -s "a=[1]" "iter(a)"
> 10000000 loops, best of 100: 0.0557 usec per loop
> List iter allocations: 6
> List iter reuse through freelist: 1011111554
> 100.00% reuse rate
>
> Which seems to show a 15% speedup.
Nope! More like 19%. Sometime early in my career, benchmarks
universally changed from reporting speedups via:
(old - new) / old * 100
to
(old - new) / new * 100
and
>>> (0.0662 - 0.0557) / 0.0557 * 100
18.850987432675037
Why did they make this change? So that slowdowns were never shown as
worse than -100%, and especially so that there was no upper bound on
reported speedups ;-)
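To make the two conventions concrete, here's a small snippet plugging
the timings from the benchmark above into both formulas:

```python
# Two conventions for reporting a speedup from timing "old" to "new".
old, new = 0.0662, 0.0557  # usec per loop, from the timeit runs above

relative_to_old = (old - new) / old * 100  # bounded: can never exceed 100%
relative_to_new = (old - new) / new * 100  # unbounded as new -> 0

print(f"{relative_to_old:.1f}% vs. old, {relative_to_new:.1f}% vs. new")
# -> 15.9% vs. old, 18.9% vs. new
```

Same measurements, so the ~15% and ~19% figures are both "right" -- they
just answer different questions.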
What I'm surprised by is that you didn't get a larger speedup. You
don't just save cycles allocating when using a free list, you also
save cycles when free'ing the object. When free'ing, obmalloc's
Py_ADDRESS_IN_RANGE alone may consume as many cycles as the free
list's combined allocation and deallocation work. While I tried to
make the common paths in obmalloc.c as fast as possible, it's still a
mostly "general purpose" allocator so has to worry about silly things
a dedicated free list can ignore (like: are they asking for 0 bytes?
asking for something small enough that I _can_ use one of my internal
free lists? if so, which one? did I pass out the pointer they're
asking me to free, or do I have to hand it off to someone else?).
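For readers who haven't seen the pattern, here's a minimal sketch of
the dedicated-free-list idea in Python rather than C (CPython's real
implementation lives in C; the names alloc_iter/free_iter and the cap
of 80 are made up for illustration, though CPython's free lists do use
small fixed caps):

```python
# Sketch of a per-type free list: keep recently freed objects on a
# small stack and hand them back on the next allocation, skipping the
# general-purpose allocator entirely on the fast path.

_FREELIST = []      # hypothetical free list for ListIter objects
_FREELIST_MAX = 80  # small fixed cap, in the spirit of CPython's free lists

class ListIter:
    """Stand-in for the C-level list iterator object."""
    def __init__(self, seq):
        self.seq = seq
        self.index = 0

def alloc_iter(seq):
    if _FREELIST:                 # fast path: recycle a cached object
        it = _FREELIST.pop()
        it.seq, it.index = seq, 0
        return it
    return ListIter(seq)          # slow path: a real allocation

def free_iter(it):
    if len(_FREELIST) < _FREELIST_MAX:
        it.seq = None             # drop references before caching
        _FREELIST.append(it)      # fast path: no deallocator involved
    # else: fall through to the general allocator's deallocation
```

Note that both halves are just a list push or pop plus a couple of
attribute stores -- none of the size-class lookup or
address-ownership checks a general-purpose allocator has to do.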
I'm mostly with MAL (Marc Andre) on this one: it's worth doing if and
only if many "real world" programs would benefit. Unfortunately,
there's never been a clear way to decide that :-(