How to measure memory footprint of Python objects?

AdrianNg3 adrian.neagu at
Wed Sep 20 15:08:47 CEST 2006

Fredrik Lundh wrote:
> Neagu, Adrian wrote:
> > I have a python program that takes a lot of memory (>hundred Mb).
> > I can see the total process size of the Python process (Task manager on MS
> > Win or Unix "ps" command) but that is not precise enough for me
> I'm not sure those two statements are compatible, though.
> if your program is using hundreds of megabytes, surely kilobyte
> precision should be good enough for you ?
Hi Fredrik,

I'll be more precise.

1) Indeed, a few kilobytes are no problem for me. For example, if I
have to write a small function to get my mem size and that function
will allocate a few Python objects that will bias the end result,
that's still OK.
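
For illustration, one such small measurement function might read the
resident set size out of /proc/self/status (a Linux-specific sketch; the
few objects it allocates are exactly the kind of small, constant bias I
said is acceptable):

```python
def get_rss_kb():
    """Return this process's resident set size in kB, or None if
    /proc/self/status is unavailable (Linux only)."""
    try:
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    # line looks like "VmRSS:     12345 kB"
                    return int(line.split()[1])
    except IOError:
        pass
    return None
```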

2) The overhead of the Python execution engine in the total size of the
process (C Python, JVM, ...) is more than just "a few kilobytes". As a
last resort, this can be ignored for my purpose at hand (it is a
constant in my comparison of different generations of my Python
application) but it is not really nice (for example, I cannot
meaningfully compare the memory footprint of only my application between
different execution engines).

3) The real problem with the OS-reported process size is its evolution
over time. On MS Win, for example, the size of the process grows
monotonically (unless an MS-specific consolidation function is called),
so by the end of the program the process size and the actual size of the
Python heap(s) bear little relation to each other. I believe the maximum
process size is an indication of the maximum size of the Python heap(s),
but I am not at all sure how reliable that indication is (and what about
different OSes?).
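
For what it's worth, on Unix that peak can be sampled from within the
process via the resource module (a sketch; note that ru_maxrss is
reported in kilobytes on Linux but in bytes on some other platforms,
e.g. macOS):

```python
import resource

def peak_rss():
    """Peak resident set size of this process, as reported by the OS.

    Units are platform-dependent: kilobytes on Linux, bytes on macOS.
    """
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
```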

Anyway, wouldn't it be much simpler (for the Python programmer) and much
faster (at run time) to surface this functionality in the sys module?
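
[Editorially, something along these lines was later surfaced in the sys
module: sys.getsizeof(), added in Python 2.6, reports the size in bytes
of a single object, though not of the objects it references:]

```python
import sys

# Size of the list object itself (header plus pointer slots); the
# three int objects it refers to are NOT included in this figure.
empty = sys.getsizeof([])
three = sys.getsizeof([1, 2, 3])
```

A deep per-object footprint still has to be computed by recursively
walking references, which is presumably why the question kept coming up.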
