Memory not being released?

Gordon McMillan gmcm at
Mon Nov 15 19:18:46 CET 1999

Steve Tregidgo writes:

> I have a process (goes up to 250+ MB, cue lots of virtual memory
> usage and hence slowdown), which busies the server so much that
> it just can't bring itself to do other things -- in other words
> it's severely impractical.
> The actual data that results from the process is relatively small
> (the useful stuff is less than 1MB, and other data -- such as
> objects that have been dealt with, to prevent following circular
> references -- is no more than a few MBs), and certainly well
> below the size of the actual footprint.
> I've gone through the loops and recursion with a fine-tooth comb,
> and found that by eliminating certain function calls the
> footprint is reduced -- the process is thus rendered useless
> (they were the important functions, of course), but at least I
> get an idea of where the memory is going.
> On the other hand, commenting out some other expressions (such as
> those that remember data -- in other words, the things that I
> would expect to take up memory and not release it again) doesn't
> make a blind bit of difference.

That's a pretty good sign that you have circular references that 
are keeping objects alive. Check out the Cyclops module.
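
A minimal sketch of why a cycle defeats pure refcounting (which is all
Python had in 1999). The `Node` class and names are invented; `gc` and
`weakref` are from later Pythons, used here only to make the effect
observable:

```python
import gc
import weakref

class Node:
    """Stand-in for the poster's objects (invented for illustration)."""
    def __init__(self, name):
        self.name = name
        self.subs = []

gc.disable()            # keep the demonstration deterministic

a = Node("a")
b = Node("b")
a.subs.append(b)        # a -> b
b.subs.append(a)        # b -> a: a reference cycle

probe = weakref.ref(a)
del a, b
# Refcounting alone can never free the pair: each object still
# holds a reference to the other, so neither count reaches zero.
alive_before = probe() is not None

gc.collect()            # the cycle collector (added in Python 2.0)
alive_after = probe() is not None
print(alive_before, alive_after)   # -> True False
```

Cyclops does the detection half of this job: it walks your live objects
and reports the cycles, so you can break them by hand.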

Also, be aware that even if your Python objects are getting 
collected, that doesn't mean your C runtime is giving the 
memory back to the OS -- although 250MB would indicate a 
pretty horrid C runtime.
> So what's going on?  My guesses so far have been along the lines
> of refcounts not going down -- how else do I explain why memory
> that was allocated was seemingly not deallocated?  I've tried
> del'ing everything in sight, storing ids instead of objects,
> inserting the odd sys.exc_traceback=None to clear those pesky
> traceback reference holders ... none of it has helped so far.
> Are there other hidden things that might hold references to my
> objects?  And if so, what can I do about them?  After using an
> object once, I don't want (don't need) to keep it in memory any
> more, so for those objects I'll be revisiting at a later date, I
> just remember their id and delete them.
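
The `sys.exc_traceback = None` trick in the quoted question targets a
real effect: a saved traceback pins every frame it references, locals
and all. A small sketch of the same effect in today's Python, where the
traceback hangs off the exception object (`Big` and `fail` are invented
names):

```python
import weakref

class Big:
    """Stand-in for a large object pinned by a traceback (invented)."""
    pass

def fail():
    big = Big()                     # local the traceback will keep alive
    probe = weakref.ref(big)
    try:
        raise ValueError("boom")
    except ValueError as e:
        return e, probe

err, probe = fail()
# err.__traceback__ references fail()'s frame, whose locals include
# big -- so big cannot be freed while the exception is kept around.
alive_before = probe() is not None
err = None                          # the modern spelling of the fix
alive_after = probe() is not None
print(alive_before, alive_after)
```

Dropping the last reference to the exception (or, in 1999 Python,
clearing `sys.exc_traceback`) releases the frame and everything it held.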

In your code, dict1 is holding a reference to every object 
you've visited, and thus keeping them all alive. OTOH, judging 
by "for item in obj.subs:", it looks like the whole graph is alive 
all the time anyway. Essentially, you've got a huge pointer-
based data structure. You need to hoist most of the smarts up 
to a Graph object, and use some kind of persistent key to 
identify objects (which are really on disk, or recreated on the 
fly...). So that snippet might be:
  for itemname in obj.subs:
    item = graph.getobject(itemname)
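
One way that Graph object might look -- everything here is an
assumption (the `shelve` backing, the method names); the point is only
that `getobject` hands out nodes by persistent key instead of keeping
the whole pointer-based structure in memory:

```python
import shelve

class Graph:
    """Look up nodes by key from an on-disk store, so visited
    objects can be garbage-collected instead of pinned in RAM.
    (A sketch of the suggestion above; names are invented --
    getobject could equally recreate nodes on the fly.)"""

    def __init__(self, path):
        self.store = shelve.open(path)   # pickles values to disk

    def getobject(self, name):
        # Load (or recreate) the node on demand; nothing in the
        # Graph itself keeps a live reference to it afterwards.
        return self.store[name]

    def putobject(self, name, node):
        self.store[name] = node

    def close(self):
        self.store.close()
```

The recursion then passes names around rather than objects, and the
visited-set (`dict1`) holds only small keys instead of live nodes.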
> If it's any help, the process does something like this:
> def recurse(obj, list, dict1, dict2):
>   # Prevent circular recursion...
>   if dict1.has_key(obj.__id__):
>     return
>   else:
>     dict1[obj.__id__] = None
>   a = do_thing(obj, dict2)
>   # Recurse into obj's children, or do something else
>   for item in obj.subs:
>     if item.spam:
>       recurse(item, list, dict1, dict2)
>     else:
>       b = do_thing(item, dict2)
>   c = do_thing(obj, dict2)
>   # Remember some things -- commenting
>   # out produces no memory saving.
>   list.append((a, b, c))
> The function do_thing implements a loop; in total (over the whole
> process) the loop's body is executed tens of thousands of times
> -- it causes the biggest memory saving when commented out (and of
> course it's the most important bit), but does no obvious
> "remembering" of things.
> Does anybody have any ideas?  Or can you maybe point me to some
> documentation that deals with this -- the FAQ looked promising
> but ultimately didn't help.
> Cheers,
> Steve
> --
>        -- Steve Tregidgo --
> Developer for Business Collaborator

- Gordon

More information about the Python-list mailing list