Garbage collection

Tom Wright tew24 at
Wed Mar 21 16:32:17 CET 2007

skip at wrote:
>     Tom> ...and then I allocate a lot of memory in another process
>     Tom> (eg. open a load of files in the GIMP), then the computer
>     Tom> swaps the Python process out to disk to free up the necessary
>     Tom> space.  Python's memory use is still reported as 953 MB, even
>     Tom> though nothing like that amount of space is needed.  From what
>     Tom> you said above, the problem is in the underlying C libraries,
>     Tom> but is there anything I can do to get that memory back without
>     Tom> closing Python?
> Not really.  I suspect the unused pages of your Python process are paged
> out, but that Python has just what it needs to keep going.

Yes, that's what's happening.

> Memory contention would be a problem if your Python process wanted to keep
> that memory active at the same time as you were running GIMP.

True, but why does Python hang on to the memory at all?  As I understand it,
it's keeping a big lump of memory on the int free list in order to make
future allocations of large numbers of integers faster.  If that memory is
about to be paged out, then surely future allocations of integers will be
*slower*, as the system will have to:

1) page out something to make room for the new integers
2) page in the relevant chunk of the int free list
3) zero all of this memory and do any other formatting required by Python

If Python freed (most of) the memory when it had finished with it, then all
the system would have to do is:

1) page out something to make room for the new integers
2) zero all of this memory and do any other formatting required by Python

Surely Python should free the memory if it's not been used for a certain
amount of time (say a few seconds), as allocation times are not going to be
the limiting factor if it's gone unused for that long.  Alternatively, it
could mark the memory as some sort of cache, so that if it needed to be
paged out, it would instead be de-allocated (thus saving the time taken to
page it back in again when it's next needed).
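The behaviour I'm talking about is easy to reproduce.  Here's a rough sketch (the sizes are my own choices, not measured figures): create a large number of distinct integers, discard them, and the interpreter's reported memory use stays at its high-water mark even after a full collection, because the freed int objects sit on the free list rather than going back to the OS.

```python
import gc

# Rough sketch of the behaviour described above.  On the CPython 2.x
# interpreters discussed in this thread, the int objects released by the
# `del` below go onto the interpreter's int free list rather than back
# to the operating system, so tools like top keep reporting the peak.
ints = [i * 2 for i in range(10**6)]  # ~1 million distinct int objects
count = len(ints)
del ints
gc.collect()  # collects reference cycles; does not shrink the free list
print(count)
```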

> I think the process's resident size is more important here than virtual
> memory size (as long as you don't exhaust swap space). 

True in theory, but the computer does tend to go rather sluggish when paging
large amounts out to disk and back.  Surely the use of virtual memory
should be avoided where possible, as it is so slow?  This is especially
true when the contents of the blocks paged out to disk will never be read.

I've also tested similar situations on Python under Windows XP, and it shows
the same behaviour, so I think this is a Python and/or GCC/libc issue,
rather than an OS issue (assuming Python for Linux and Python for Windows
are both compiled with GCC).

I'm at CAMbridge, not SPAMbridge
