[Python-Dev] Windows Low Fragmentation Heap yields speedup of ~15%

Tim Peters tim.peters at gmail.com
Fri Feb 18 04:38:08 CET 2005


[Gfeller Martin]
> what immediately comes to mind are Modules/cPickle.c and
> Modules/cStringIO.c, which (I believe) are heavily used by ZODB (which in turn
> is heavily used by the application).

I probably guessed right the first time <wink>:  LFH doesn't help with
the lists directly, but helps indirectly by keeping smaller objects
out of the general heap where the list guts actually live.

Say we have a general heap with a memory map like this, meaning a
contiguous range of available memory, where 'f' means a block is free.
 The units of the block don't really matter, maybe one 'f' is one
byte, maybe one 'f' is 4MB -- it's all the same in the end:

fffffffffffffffffffffffffffffffffffffffffffffff

Now you allocate a relatively big object (like the guts of a large
list), and it's assigned a contiguous range of blocks marked 'b':

bbbbbbbbbbbbbbbffffffffffffffffffffffffffffffff

Then you allocate a small object, marked 's':

bbbbbbbbbbbbbbbsfffffffffffffffffffffffffffffff

Then you want to grow the big object.  Oops!  It can't extend the block
of b's in-place, because 's' is in the way.  Instead it has to copy
the whole darn thing:

fffffffffffffffsbbbbbbbbbbbbbbbffffffffffffffff

But if 's' is allocated from some _other_ heap, then the big object
can grow in-place, and that's much more efficient than copying the
whole thing.
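The heap maps above can be turned into a toy model.  This is only an
illustration of the scenario, not how any real allocator is implemented:
a heap is a list of cells, 'f' free, 'b' big-object block, 's'
small-object block, and a grow-in-place succeeds only when the cells
just past the b-run are free.

```python
def can_grow_in_place(heap, mark, extra):
    """Can the run of `mark` cells extend by `extra` cells in place?"""
    end = max(i for i, c in enumerate(heap) if c == mark) + 1
    if end + extra > len(heap):
        return False
    return all(c == 'f' for c in heap[end:end + extra])

# The map from the text: one small object sits right after the big one.
heap = list("bbbbbbbbbbbbbbbsfffffffffffffffffffffffffffffff")
print(can_grow_in_place(heap, 'b', 5))   # False: 's' is in the way

# If 's' lives in some other heap, the cells after the b-run are free:
heap2 = list("bbbbbbbbbbbbbbbffffffffffffffffffffffffffffffff")
print(can_grow_in_place(heap2, 'b', 5))  # True: grows in place
```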

obmalloc has two primary effects:  it manages a large number of very
small (<= 256 bytes) memory chunks very efficiently, but it _also_
helps larger objects indirectly, by keeping the very small objects out
of the platform C malloc's way.

LFH appears to be an extension of the same basic idea, raising the
"small object" limit to 16KB.

Now note that pymalloc and LFH are *bad* ideas for objects that want
to grow.  pymalloc and LFH segregate the memory they manage into
blocks of different sizes.  For example, pymalloc keeps a list of free
blocks each of which is exactly 64 bytes long.  Taking a 64-byte block
out of that list, or putting it back in, is very efficient.  But if an
object that uses a 64-byte block wants to grow, pymalloc can _never_
grow it in-place, it always has to copy it.  That's a cost that comes
with segregating memory by size, and for that reason Python
deliberately doesn't use pymalloc in several cases where objects are
expected to grow over time.
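The size-class segregation can be sketched as a rounding function.
The figures here (8-byte alignment, 256-byte small-object limit) match
the text's description of pymalloc at the time, but the function itself
is a toy model, not pymalloc's actual code:

```python
ALIGNMENT = 8       # pymalloc-style size classes: multiples of 8
SMALL_LIMIT = 256   # requests above this go to the general allocator

def size_class(n):
    """Round a request up to its size class, or None if too big."""
    if n > SMALL_LIMIT:
        return None
    return (n + ALIGNMENT - 1) // ALIGNMENT * ALIGNMENT

print(size_class(60))   # 64: served from the 64-byte free list
print(size_class(64))   # 64: same free list
print(size_class(65))   # 72: a *different* free list -- growing past
                        #     a class boundary forces a copy
print(size_class(300))  # None: handed to the platform malloc
```

The copy-on-grow cost falls out of this picture: a 64-byte block and a
72-byte block live in different free lists, so there is never a free
cell "just past" a block to extend into.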

One thing to take from that is that LFH can't be helping list-growing
in a direct way either, if LFH (as seems likely) also needs to copy
objects that grow in order to keep its internal memory segregated by
size.  The indirect benefit is still available, though:  LFH may be
helping simply by keeping smaller objects out of the general heap's
hair.

> The lists also get fairly large, although not huge - up to typically 50000
> (complex) objects in the tests I've measured.

That's much larger than LFH can handle.  Its limit is 16KB.  A Python
list with 50K elements requires a contiguous chunk of 200KB on a
32-bit machine to hold the list guts.
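The arithmetic, using the figures from the text (4-byte pointers on a
32-bit box, 16KB LFH cutoff):

```python
POINTER_SIZE = 4        # bytes per pointer on a 32-bit machine
LFH_LIMIT = 16 * 1024   # LFH "small object" cutoff

n = 50_000
guts = n * POINTER_SIZE
print(guts)              # 200000 bytes, ~200KB
print(guts > LFH_LIMIT)  # True: far beyond what LFH will manage
```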

> As I said, I don't speak C, so I can only speculate - do the lists at some
> point grow beyond the upper limit of obmalloc, but are handled by the LFH
> (which has a higher upper limit, if I understood Tim Peters correctly)?

A Python list object comprises two separately allocated pieces of
memory.  First is a list header, a small piece of memory of fixed
size, independent of len(list).  The list header is always obtained
from obmalloc; LFH will never be involved with that, and neither will
the system malloc.  The list header has a pointer to a separate piece
of memory, which contains the guts of a list, a contiguous vector of
len(list) pointers (to Python objects).  For a list of length n, this
needs 4*n bytes on a 32-bit box.  obmalloc never manages that space,
and for the reason given above:  we expect that list guts may grow,
and obmalloc is meant for fixed-size chunks of memory.
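The header-plus-guts split can be glimpsed with sys.getsizeof, which
reports the header plus the pointer vector.  Exact sizes are platform-
and version-dependent, so only the shape of the numbers matters here:

```python
import sys

empty = []
big = list(range(50_000))

# getsizeof counts header + pointer vector.  The header part is fixed,
# so the difference between the two is (roughly) the guts:
# len(list) * sizeof(pointer), plus any over-allocation.
print(sys.getsizeof(empty))                       # just the fixed header
print(sys.getsizeof(big) - sys.getsizeof(empty))  # ~50_000 * sizeof(pointer)
print(sys.getsizeof(big) >= 50_000 * 4)           # True: pointers are >= 4 bytes
```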

So the list guts will get handled by LFH, until the list needs more
than 4K entries (hitting the 16KB LFH limit).  Until then, LFH
probably wastes time by copying growing list guts from size class to
size class.  Then the list guts finally get copied to the general
heap, and stay there.
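The 4K-entry crossover follows directly from the figures in the text
(16KB LFH cutoff, 4-byte pointers on a 32-bit box):

```python
LFH_LIMIT = 16 * 1024   # LFH "small object" cutoff, per the text
POINTER_SIZE = 4        # 32-bit pointers, as assumed above

# Past this many entries, the list guts no longer fit under the LFH
# limit and move to the general heap for good.
print(LFH_LIMIT // POINTER_SIZE)  # 4096
```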

I'm afraid the only way you can know for sure is by obtaining detailed
memory maps and analyzing them.

