
On Mon, Dec 26, 2005 at 08:22:24PM -0500, Bob Ippolito wrote:
> Take a look at: http://docs.python.org/lib/module-gc.html
> You can adjust the threshold for each GC generation to suit your application better.
gc.set_threshold(1, 1, 1) fixed it too; any other setting didn't (sometimes memory still grows to 100 MB). If (1, 1, 1) is as good as it gets, I'll keep calling gc.collect() during the factory restart, since the 50 MB allocation only happens after the connectionMade callback and never again in the context of any given protocol. So the explicit gc.collect() seems like the optimal fix to me for now.

I wish the size of the allocations were somehow taken into account by the threshold tunables. I'd like to be able to say gc.set_mem_threshold({30*1024*1024: (10, 10, 10), 50*1024*1024: (1, 1, 1)}), i.e. a dynamic threshold. It should be possible to implement this in O(1): the interpreter could easily track how much anonymous memory it has allocated with malloc at any given time, and the more anonymous memory is outstanding, the lower the thresholds for the older generations should become. It could be a linear function too. However, for now I'm happy with the gc.collect(). Thanks a lot for all the help.
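
In case it helps anyone else, here is a minimal sketch of the workaround described above: tune the collector thresholds at startup and force one full collection right after the large per-connection allocation in connectionMade. The protocol/factory names and the placeholder state are invented for illustration, not taken from my actual code.

    import gc

    from twisted.internet import protocol, reactor

    # Most aggressive setting; the CPython default is roughly (700, 10, 10).
    gc.set_threshold(1, 1, 1)


    class BigAllocProtocol(protocol.Protocol):
        def connectionMade(self):
            # The ~50 MB of per-connection state is built exactly once here,
            # so one explicit full collection afterwards is enough.
            self.state = self.factory.buildState()
            gc.collect()


    class BigAllocFactory(protocol.ServerFactory):
        protocol = BigAllocProtocol

        def buildState(self):
            # Placeholder for the real (large) state construction.
            return ' ' * (50 * 1024 * 1024)


    reactor.listenTCP(8000, BigAllocFactory())
    reactor.run()

With that in place I can leave the thresholds at their defaults instead of (1, 1, 1), since the one-off gc.collect() already reclaims the big allocation.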