[Patches] Fix for memory leak in _PyTuple_Resize
Sun, 16 Apr 2000 23:31:39 -0400 (EDT)
>>>>> "CGW" == Charles G Waldman <email@example.com> writes:
CGW> Jeremy Hylton writes:
>> Question about this patch: Is it always more efficient to re-use
>> a tuple (and update all the ob_item pointers)? Or are there
>> cases where it would be more efficient to realloc and avoid the
>> cost of updating the pointers?
CGW> According to Webster's, "efficient" has two meanings: 1:
CGW> immediately effecting 2: productive of desired effects; esp :
CGW> productive without waste
CGW> Realloc might be (?) more efficient in sense 1 (speed), but
CGW> certainly not in sense 2 (waste). It just seems ecologically
CGW> wrong to be keeping this heap of used tuples around for
CGW> recycling, then going to the store and buying a new one when
CGW> your old one doesn't fit any more, instead of looking on the
CGW> recycling heap first. If speed is a great concern, the
CGW> updating of the ob_item pointers could be done with a memcpy (I
CGW> thought about doing it this way but the for-loop seemed clearer).
The list of free tuples is, I assume, a speed optimization. If speed
were not an issue, it would be simplest by far to use malloc and free
each time. If the free list is a speed optimization, it seems
sensible to consider that issue.
I think your proposed patch looks reasonable, but I didn't know what
values to use for the cost model. (How frequently does realloc move
the pointer? How much does that cost? How much does the pointer
update cost?) Performance may not be a crucial issue here, since
_PyTuple_Resize is only used three times in the Python core, but it
seems worth considering.
>> There's another problem with the current strategy for reusing
>> tuples that we might fix at the same time. If a program creates
>> a large number of tuples, later releases them all, and never
>> creates any more tuples of the same size, that memory will live
>> forever on the free list. If I have a program that creates a
>> million 3-tuples at one blow, that memory will never get freed.
CGW> Making a limit on the size of the recycling heap is a different
CGW> issue and maybe that should be done as well, but I don't think
CGW> that's a reason not to fix _PyTuple_Resize.
It's a related issue. If the cost of the patch is much greater than
the cost of realloc (not saying that it is), then it may be worth
leaving realloc alone.
CGW> I thought about adding this too, but chose not to for the
CGW> following 4 reasons: (1) I have to choose an arbitrary value
CGW> for the cut-off and I don't like to impose arbitrary limits.
CGW> Do you want to keep 42 free tuples? 137? 1024? Somebody else
CGW> can come up with the number and I'll put that in too.
Yes. Arbitrary constants are worrisome, but I think not worrisome
enough to leave 2,000,000 unused 10-tuples on the free list.
CGW> (2) I am not really 100% sure that the behavior you describe is
CGW> a problem. Maybe the function that creates the million
CGW> 3-tuples will run again and need to create another million
CGW> 3-tuples and having them all cached will be advantageous. Or
I think it's a bug because it is a surprising effect. If you create
and then free a lot of tuples, it seems unfortunate that a speed
optimization causes memory utilization to stay high.
CGW> maybe not. But (4) I have been thinking about memory leaks and
CGW> "ecology" and it seems to me to be a desirable property that
CGW> you be able to call a function f() then call it again and again
CGW> and not use up more memory on each call. (Of course I'm
CGW> assuming you use the same parameters and f isn't deliberately
CGW> constructed to consume memory...) To me, continuing to leak a
CGW> little bit of memory on each call is worse than doing a huge
CGW> alloc one time.... it's OK if the high-water mark goes up way
CGW> high and doesn't recede, as long as it doesn't keep inching up.
I'm not sure I'm convinced. I don't think it matters how the tank
fills up (a little at a time or all at once); if the program doesn't
need the memory, it shouldn't remain in use because of some detail of
the free list implementation.
CGW> When I iterate the tests in the Python Lib/test package and
CGW> plot the interpreter's memory footprint vs. iteration number, I
CGW> see two distinct patterns - a monotone increasing ramp, or a
CGW> "mesa". If I see the ramp, I know right away that there is
CGW> leaky behavior, and I can try to fix it. (Right now I see this
CGW> with the modules test_cpickle, test_extcall, test_nis, and
CGW> test_unicode, and I hope to fix all of these leaks, time
CGW> permitting.)
>> It should be possible to fix this problem by keeping track of the
>> size of each freelist and never letting it exceed some maximum
>> size. If it actually ends up being more efficient to realloc
>> than copy, then this patch would limit the amount of loss caused
>> by oddball cases like calling PyObject_Sequence on an instance
>> that overstates its size.
CGW> Yes, it's an oddball case, but it came from Python's own test
CGW> suite! (I'm not devious enough to have come up with this one,
CGW> I don't think).
I'll take that as a compliment! I was feeling devious when I wrote it.
CGW> Ideally, no matter what you throw at it, you
CGW> shouldn't be able to get Python to exhibit signs of insidious
CGW> memory leaks, especially not by running the built-in test suite.
CGW> "Limiting the loss" is less elegant than not allowing it to
CGW> happen in the first place. If the limit was 1024 I could run
CGW> the "test_extcall" test 1000 times and think I saw a memory
CGW> leak.
I think "limiting the loss" is a less elegant solution to the problem
you discovered, but I think the second problem is worth solving, too.