Instead of collecting objects after a fixed number of allocations (700), you could make it dynamic, like this:

    # initial values
    min_allocated_memory = 0
    max_allocated_memory = 0
    next_gc_run = 1024 * 1024

    def manage_memory():
        global min_allocated_memory, max_allocated_memory, next_gc_run
        allocated_memory = get_allocated_memory()
        min_allocated_memory = min(min_allocated_memory, allocated_memory)
        max_allocated_memory = max(max_allocated_memory, allocated_memory)
        if max_allocated_memory - min_allocated_memory > next_gc_run:
            # run the gc
            memory_freed, allocated_memory = run_gc()
            next_gc_run = max(
                allocated_memory * 1.5 - memory_freed,
                1024 * 1024
            )
            min_allocated_memory = allocated_memory
            max_allocated_memory = allocated_memory

manage_memory() should be called after every allocation and whenever an object's refcount reaches 0 (memory is freed).

Expected behaviours:

=> The fewer objects contain cyclic references, the less often the GC will run (memory_freed is small).
=> The more objects contain cyclic references, the more often the GC will run (memory_freed is large).
=> If memory utilization grows fast (burst allocations), the GC will run less often:
   next_gc_run = allocated_memory * 1.5 - memory_freed ...

Of course, the constants 1.5 and 1024 * 1024 are only suggestions...

- Ralf
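[Editor's note: the proposal above can be exercised end to end with a small simulation. The `AdaptiveGC` class, its stub heap, and the stub collector below are hypothetical stand-ins invented for illustration (a real allocator would supply `get_allocated_memory` and `run_gc`); only the policy logic and the constants 1.5 and 1024 * 1024 come from the proposal.]

```python
MIN_THRESHOLD = 1024 * 1024  # floor on the GC trigger, as in the proposal


class AdaptiveGC:
    """Stub heap that applies the adaptive-threshold policy sketched above."""

    def __init__(self):
        self.allocated = 0        # bytes currently allocated (stub heap)
        self.cyclic_garbage = 0   # bytes only a cycle collector could free
        self.min_seen = 0
        self.max_seen = 0
        self.next_gc_run = MIN_THRESHOLD
        self.gc_runs = 0

    def allocate(self, size, cyclic=False):
        self.allocated += size
        if cyclic:
            self.cyclic_garbage += size
        self._manage_memory()

    def free(self, size):
        self.allocated -= size
        self._manage_memory()

    def _run_gc(self):
        # Stub collector: frees all cyclic garbage accumulated so far.
        freed = self.cyclic_garbage
        self.allocated -= freed
        self.cyclic_garbage = 0
        self.gc_runs += 1
        return freed, self.allocated

    def _manage_memory(self):
        # The manage_memory() logic from the proposal, verbatim in spirit.
        self.min_seen = min(self.min_seen, self.allocated)
        self.max_seen = max(self.max_seen, self.allocated)
        if self.max_seen - self.min_seen > self.next_gc_run:
            freed, allocated = self._run_gc()
            self.next_gc_run = max(allocated * 1.5 - freed, MIN_THRESHOLD)
            self.min_seen = allocated
            self.max_seen = allocated


heap = AdaptiveGC()
for _ in range(100):
    heap.allocate(64 * 1024, cyclic=True)  # burst of cyclic allocations
```

With every allocation cyclic, `memory_freed` is large at each collection, so `next_gc_run` stays at the floor and the collector fires repeatedly during the burst, exactly the "more cyclic references, more frequent GC" behaviour described above.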
At 11:28 PM +0200 6/21/08, none wrote:
> Instead of collecting objects after a fixed number of allocations (700) ...
I've seen this asserted several times in this thread: that GC is done every fixed number of allocations. This is not correct. GC is done when the surplus of allocations less deallocations exceeds a threshold. See Modules/gcmodule.c and look for ".count++" and ".count--".

In normal operation, allocations and deallocations stay somewhat balanced, but when creating a large data structure, it's allocations all the way and GC runs often.
-- 
____________________________________________________________________
TonyN.:' mailto:tonynelson@georgeanelson.com
        ' http://www.georgeanelson.com/
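[Editor's note: the trigger Tony describes can be observed directly with CPython's real `gc` module. The exact numbers printed depend on the interpreter build and on anything that has tuned the thresholds, so they are shown only as typical defaults.]

```python
import gc

# The first threshold (700 by default) is compared against the surplus of
# container allocations minus deallocations in generation 0, not against a
# raw allocation count.
print(gc.get_threshold())  # typically (700, 10, 10)

# get_count() shows the current per-generation surplus counters that the
# ".count++" / ".count--" lines in Modules/gcmodule.c maintain.
print(gc.get_count())
```

Creating many container objects without releasing them drives the generation-0 counter toward the threshold, which is why a large data structure being built triggers frequent collections.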
participants (2)
-
none
-
Tony Nelson