Wed, 01 May 2002 01:19:09 -0400
> Hmm! I bet we could go a long way leaving gc out of this entirely:
> BUILD_TUPLE in ceval.c could check the types of the objects it
> stuffs into a new tuple, and untrack it immediately then if it were safe
> to do so. And tupleconcat() could mindlessly inherit trackedness from
> the logical "or" of its inputs' trackedness. Would that get 80% of the
> potential benefit with 5% of the potential work?
This looks promising, but needs more work.
First off, BUILD_TUPLE gets surprised by the empty tuple: after untracking
it once, it gets back the same empty tuple later, and blows up when trying
to untrack it again. So I had to break into the GC abstraction to avoid
untracking when something was already untracked. That test is a
heartbreaker, since it's only the empty tuple that's a potential problem
there. Probably better to have tupleobject.c do something special.
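For the record, the empty tuple really is a single shared object in CPython, which is why a second BUILD_TUPLE of zero elements hands back an already-untracked object. A quick check (CPython-specific behavior):

```python
def f():
    return ()

def g():
    return ()

# CPython caches the empty tuple, so every zero-length tuple
# construction returns the very same object.
assert f() is g()
assert f() is ()
```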
After fiddling builtin_zip to untrack the tuples it creates when that's safe
(untrack is safe == for each slot s in the tuple, either s is not of a
gcable type, or s is of a gcable type but s is already untracked), the
justzip() test showed a wonderful surprise: it took *forever*!
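The safety rule in that parenthetical can be stated as a small Python predicate. This is just a sketch of the condition, not anything in CPython's sources; `untrack_is_safe` is my name for it:

```python
import gc

def untrack_is_safe(t):
    """True iff every slot of tuple t is already outside gc's purview:
    either not of a gcable type at all, or gcable but untracked."""
    return all(not gc.is_tracked(item) for item in t)

# ints, floats and strings are never gc-tracked, so this tuple is safe:
assert untrack_is_safe((1, 2.0, "x"))
# a list is gc-tracked, so a tuple holding one must stay tracked:
assert not untrack_is_safe(([],))
```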
This is cute: _PyObject_GC_UNTRACK() doesn't decrease gc's "excess
allocation" count, but tuples keep getting created. After "the list" makes
it into gen2, the excess allocation count just keeps growing, and
collect_generations() keeps getting called. But there's nothing *in* gen0
or gen1 anymore (the tuples get untracked), and collect_generations()
neither calls collect(), nor sets allocations back to 0 itself, when a gen0
or gen1 collection occurs and the gen0 & gen1 lists are empty.
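The "excess allocation" count is visible from Python today as the first element of `gc.get_count()`; with automatic collection switched off it grows monotonically as container objects are allocated, which is the raw material for the runaway behavior described above. A sketch against the modern `gc` module API:

```python
import gc

gc.disable()           # stop automatic collections so the counter only grows
try:
    gc.collect()       # start from a freshly collected state
    base = gc.get_count()[0]
    junk = [[] for _ in range(100)]   # 100 new gc-tracked lists
    grown = gc.get_count()[0]
    # each gc-tracked allocation bumped the gen0 count
    assert grown - base >= 100
finally:
    gc.enable()
```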
The upshot is that "allocated" is (almost) always greater than threshold0,
so collect_generations() gets called *every* time a tuple gets allocated.
Every 10th tuple allocation then triggers a useless null gen1 collection, and
about every 100th a gen2 collection. The tuples aren't
getting tracked anymore, but the list containing them is, and it crawls over
all the list slots each time a gen2 collection gets done (and that happens
about 700x more frequently than it's supposed to).
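Those numbers line up with gc's default thresholds, still visible via `gc.get_threshold()`: gen1 runs every threshold1 = 10 gen0 passes, gen2 every threshold2 = 10 gen1 passes (so every 100th), and threshold0 = 700 is where the "700x" figure comes from. The exact values are an implementation detail, but (700, 10, 10) has been CPython's default for a long time:

```python
import gc

# Long-standing CPython defaults; other builds could differ.
t0, t1, t2 = gc.get_threshold()
assert (t0, t1, t2) == (700, 10, 10)
```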
So that turned out to be quadratic-time with extreme prejudice <wink>.
Anyway, setting 'allocated' back to 0 at the end of collect_generations
cured that, and untracking the tuples did do some good: the runtime of
justzip() fell by more than a factor of 2.  The gen2 traversals of "the
list" still slowed it down considerably, though (over what happens when gc
is disabled).
So, if we want to pursue this, perhaps what gc really wants to count is the
excess of gc tracks over gc untracks. Right now they're strongly correlated
with gc allocations and deallocations, respectively. But if we started
untracking "safe" tuples, that connection would be broken. If we did exempt
"safe tuples" from gc, and counted the excess of tracks over untracks, gc
would never trigger in justzip(). So that's the attraction.
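For what it's worth, tuple untracking did eventually make it into CPython, though during collection rather than at creation: when the collector first examines a tuple whose slots are all untracked, it untracks the tuple. You can watch it happen with `gc.is_tracked()`:

```python
import gc

t = tuple([1, 2.0, "three"])   # built at runtime, so gc tracks it at first
assert gc.is_tracked(t)
gc.collect()                   # collector sees t can never be in a cycle...
assert not gc.is_tracked(t)    # ...and untracks it

u = tuple([[]])                # holds a gc-tracked list, so it must stay
gc.collect()
assert gc.is_tracked(u)
```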
I think that much would be principled, since I view the "excess" statistic
as trying to guess how many objects are sitting in gen0, triggering a round
of cycle gc when gen0 gets big enough; track-untrack is a more direct
measure of gen0's size than allocated-deallocated.