[Python-Dev] finalization again
Guido van Rossum
Thu, 09 Mar 2000 14:55:23 -0500
[Tim describes a more formal approach based on maximal strongly
connected components (SCCs).]
I like the SCC approach -- it's what I was struggling to invent but
came short of discovering.
> > What's left on the first list then consists of finalizer-free garbage.
> > We dispose of this garbage by clearing dicts and lists. Hopefully
> > this makes the refcount of some of the finalizers go to zero -- those
> > are finalized in the normal way.
> In Python it's even possible for a finalizer to *install* a __del__ method
> that didn't previously exist, into the class of one of the objects on your
> "first list". The scheme above is meant to be bulletproof in the face of
> abuses even I can't conceive of <wink>.
Are you *sure* your scheme deals with this? Let's look at an example.
(Again, lowercase nodes have no finalizers.) Take G:
a <=> b -> C
This is G' (a and b are strongly connected):
a' -> C'
C is not reachable from any root node. We decide to clear a and b.
Let's suppose we happen to clear b first. This removes the last
reference to C, C's finalizer runs, and it installs a finalizer on
a.__class__. So now a' has turned into A', and we're halfway to
committing a crime we said we would never commit (touching cyclical
trash with finalizers).
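For concreteness, here is a minimal sketch of that pathological case (the
class names and the `installed` flag are mine, purely illustrative): a
finalizer that installs a brand-new __del__ on another object's class after
the collector has already classified its instances as finalizer-free.

```python
installed = []

class A:      # "a" in the example: no finalizer to start with
    pass

class B:      # "b": no finalizer either
    pass

class C:      # "C": has a finalizer
    def __del__(self):
        # The crime: give A a __del__ after the collector decided
        # that instances of A were finalizer-free trash.
        A.__del__ = lambda self: None
        installed.append("A grew a __del__")

a, b = A(), B()
a.partner, b.partner = b, a   # a <=> b
b.c = C()                     # b -> C (the only reference to C)

b.__dict__.clear()            # "clearing b" drops the last reference to C
assert hasattr(A, "__del__")  # C.__del__ ran and mutated class A
```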
I propose to disregard this absurd possibility, except to the extent
that Python shouldn't crash -- but we make no guarantees to the user.
> More mundanely, clearing an item on your first list can cause a chain of
> events that runs a finalizer, which in turn can resurrect one of the objects
> on your first list (and so it should *not* get reclaimed). Without doing
> the SCC bit, I don't think you can out-think that (the reasoning above
> showed that the finalizer can't resurrect something in the *same* SCC as the
> object that started it all, but that argument cannot be extended to objects
> in other safe SCCs: they're vulnerable).
I don't think so. While my poor wording ("finalizer-free garbage")
didn't make this clear, my references to earlier algorithms were
intended to imply that this is garbage that consists of truly
unreachable objects. I have three lists: let's call them T(rash),
R(oot-reachable), and F(inalizer-reachable). The Schemenauer
c.s. algorithm moves all reachable nodes to R. I then propose to move
all finalizers to F, and to run another pass of Schemenauer c.s. to
also move all finalizer-reachable (but not root-reachable) nodes to F.
I truly believe that (barring the absurdity of installing a new
__del__) the objects on T at this point cannot be resurrected by a
finalizer that runs, since they aren't reachable from any finalizers:
by virtue of Schemenauer c.s. (which computes a reachability closure
given some roots) anything reachable from a finalizer is on F by now
(if it isn't on R -- again, nothing on T is reachable from R, because
R is calculated as a closure).
So, unless there's still a bug in my thinking here, I think that as
long as we only want to clear SCCs with 0 finalizers, T is exactly the
set of nodes we're looking for.
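A toy model may make the three-list partition concrete. This is my own
sketch, not the actual patch: the object graph is just a dict of adjacency
sets, `reachable` stands in for a Schemenauer-style traversal pass, and the
names `partition`, `roots`, `finalizers` are mine.

```python
def reachable(starts, graph):
    """Transitive closure from `starts` (one Schemenauer-style pass)."""
    seen = set()
    stack = list(starts)
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return seen

def partition(graph, roots, finalizers):
    """Split nodes into R(oot-reachable), F(inalizer-reachable), T(rash)."""
    R = reachable(roots, graph)
    # Second pass: finalizers not already root-reachable seed F.
    F = reachable(finalizers - R, graph) - R
    T = set(graph) - R - F
    return T, R, F

# Example: root -> x;  cycle p <=> q where p has __del__;  dead cycle u <=> v.
graph = {
    "root": {"x"}, "x": set(),
    "p": {"q"}, "q": {"p"},
    "u": {"v"}, "v": {"u"},
}
T, R, F = partition(graph, roots={"root"}, finalizers={"p"})
# T == {"u", "v"}: truly unreachable, safe to clear without running any __del__.
```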
> This time the echo came back distorted <wink>:
> Cycles involving one or more finalizable objects are never finalized.
> A<=>b is "a cycle involving one or more finalizable objects", so he won't
> touch it. The scheme at the top doesn't either. If you handed him your
> *derived* graph (but also without the self-loops), he would; me too. KISS!
> > Note that we're now calling finalizers on objects with a non-zero
> > refcount.
> I don't know why you want to do this. As the next several paragraphs
> confirm, it creates real headaches for the implementation, and I'm unclear
> on what it buys in return. Is "we'll do something by magic for cycles with
> no more than one finalizer" a major gain for the user over "we'll do
> something by magic for cycles with no finalizer"? 0, 1 and infinity *are*
> the only interesting numbers <wink>, but the difference between 0 and 1
> *here* doesn't seem to me worth signing up for any pain at all.
I do have a reason: if a maximal SCC has only one finalizer, there can
be no question about the ordering between finalizer calls. And isn't
the whole point of this discussion to have predictable ordering of
finalizer calls in the light of trash recycling?
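To illustrate the check, here is a sketch run on the earlier a <=> b -> C
graph. Kosaraju's algorithm is my arbitrary choice for finding the SCCs;
none of this code is from the real collector.

```python
def sccs(graph):
    """Strongly connected components via Kosaraju's two passes."""
    seen = set()

    def dfs(node, g, out):
        # Iterative DFS, emitting nodes in postorder via out().
        seen.add(node)
        stack = [(node, iter(g[node]))]
        while stack:
            n, it = stack[-1]
            for m in it:
                if m not in seen:
                    seen.add(m)
                    stack.append((m, iter(g[m])))
                    break
            else:
                stack.pop()
                out(n)

    order = []
    for n in graph:
        if n not in seen:
            dfs(n, graph, order.append)

    rev = {n: set() for n in graph}       # transpose of the graph
    for n, ms in graph.items():
        for m in ms:
            rev[m].add(n)

    seen = set()
    comps = []
    for n in reversed(order):             # sweep in reverse finish order
        if n not in seen:
            comp = []
            dfs(n, rev, comp.append)
            comps.append(frozenset(comp))
    return comps

# a <=> b -> C from the earlier example: two SCCs, {a, b} and {C}.
graph = {"a": {"b"}, "b": {"a", "C"}, "C": set()}
components = sccs(graph)
finalizers = {"C"}
per_scc = [len(comp & finalizers) for comp in components]
assert max(per_scc) <= 1   # at most one finalizer per SCC: no intra-SCC
                           # ordering question; the condensation's topsort
                           # ({a,b} before {C}) decides the rest.
```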
> I would have no objection to "__del__ called only once" if it weren't for
> that Python currently does something different. I don't know whether people
> rely on that now; if they do, it's a much more dangerous thing to change
> than adding a new keyword (the compiler gives automatic 100% coverage of the
> latter; but nothing mechanical can help people track down reliance-- whether
> deliberate or accidental --on the former).
> But none of this self-sampling is going to comfort some guy in France who
> has a megaline of code relying on it. Good *bet*, though <wink>.
OK -- so your objection is purely about backwards compatibility.
Apart from that, I strongly feel that the only-once rule is a good
one. And I don't think that the compatibility issue weighs very
strongly here (given all the other problems that typically exist with
__del__).
> I see Marc-Andre already declined to get sucked into the magical part of
> this <wink>. Greg should speak for his scheme, and I haven't made time to
> understand it fully; my best guess is to call x.__cleanup__ for every object
> in the SCC (but there's no clear way to decide which order to call them in,
> and unless they're more restricted than __del__ methods they can create all
> the same problems __del__ methods can!).
Yes, but at least since we're defining a new API (in a reserved
portion of the method namespace) there are no previous assumptions to
break.
> > Note that I'd like some implementation freedom: I may not want to
> > bother with the graph reduction algorithm at first (which seems very
> > hairy) so I'd like to have the right to use the __cleanup__ API
> > as soon as I see finalizers in cyclical trash. I don't mind disposing
> > of finalizer-free cycles first, but once I have more than one
> > finalizer left in the remaining cycles, I'd like the right not to
> > reduce the graph for topsort reasons -- that algorithm seems hard.
> I hate to be realistic <wink>, but modern GC algorithms are among the
> hardest you'll ever see in any field; even the outer limits of what we've
> talked about here is baby stuff. Sun's Java group (the one in Chelmsford,
> MA, down the road from me) had a group of 4+ people (incl. the venerable Mr.
> Steele) working full-time for over a year on the last iteration of Java's
> GC. The simpler BDW is a megabyte of code spread over 100+ files. Etc --
> state of the art GC can be crushingly hard.
> So I've got nothing against taking shortcuts at first -- there's actually no
> realistic alternative. I think we're overlooking the obvious one, though:
> if any finalizer appears in any trash cycle, tough luck. Python 3000 --
> which may be a spelling of 1.7 <wink>, but doesn't *need* to be a spelling
> of 1.6.
Kind of sad though -- finally knowing about cycles and then not being
able to do anything about them.
> > So we're back to the __cleanup__ design. Strawman proposal: for all
> > finalizers in a trash cycle, call their __cleanup__ method, in
> > arbitrary order. After all __cleanup__ calls are done, if the objects
> > haven't all disposed of themselves, they are all garbage-collected
> > without calling __del__. (This seems to require another garbage
> > collection cycle -- so perhaps there should also be a once-only rule
> > for __cleanup__?)
> > Separate question: what if there is no __cleanup__? This should
> > probably be reported: "You have cycles with finalizers, buddy! What
> > do you want to do about them?" This same warning could be given when
> > there is a __cleanup__ but it doesn't break all cycles.
> If I *ever* have a trash cycle with a finalizer in my code (> 0 -- "exactly
> 1" isn't special to me), I will consider it to be a bug. So I want a way to
> get it back from gc, so I can see what the heck it is, so I can fix my code
> (or harass whoever did it to me). __cleanup__ suffices for that, so the
> very act of calling it is all I'm really after ("Python invoked __cleanup__
> == Tim has a bug").
> But after I outgrow that <wink>, I'll certainly want the option to get
> another kind of complaint if __cleanup__ doesn't break the cycles, and after
> *that* I couldn't care less. I've given you many gracious invitations to
> say that you don't mind leaking in the face of a buggy program <wink>, but
> as you've declined so far, I take it that never hearing another gripe about
> leaking is a Primary Life Goal. So collection without calling __del__ is
> fine -- but so is collection with calling it! If we're going to (at least
> implicitly) approve of this stuff, it's probably better *to* call __del__,
> if for no other reason than to catch your case of some poor innocent object
> caught in a cycle not of its making that expects its __del__ to abort
> starting World War III if it becomes unreachable <wink>.
I suppose we can print some obnoxious message to stderr like
"""Your program has created cyclical trash involving one or more
objects with a __del__ method; calling their __cleanup__ method didn't
resolve the cycle(s). I'm going to call the __del__ method(s) but I
can't guarantee that they will be called in a meaningful order,
because of the cyclical dependencies."""
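The whole fallback protocol might be sketched like this (my illustration
only: `dispose_cycle`, `Node`, and the toy re-check are stand-ins for what
the collector would do internally):

```python
import sys

calls = []

class Node:
    def __init__(self, name):
        self.name = name
        self.next = None
    def __cleanup__(self):
        calls.append(("cleanup", self.name))
        # A cooperative __cleanup__ would set self.next = None here;
        # this one deliberately doesn't, to exercise the fallback.
    def __del__(self):
        calls.append(("del", self.name))

def dispose_cycle(members):
    # Step 1: give every finalizable object a chance to break the cycle.
    for obj in members:
        cleanup = getattr(obj, "__cleanup__", None)
        if cleanup is not None:
            cleanup()
    # Step 2 (toy re-check): if the cycle survived, complain and fall
    # back to __del__ in unspecified order -- once only, per the rule.
    if all(obj.next is not None for obj in members):
        sys.stderr.write("trash cycle survived __cleanup__; "
                         "calling __del__ in unspecified order\n")
        for obj in members:
            type(obj).__del__(obj)

x, y = Node("x"), Node("y")
x.next, y.next = y, x          # x <=> y: cyclical trash with finalizers
dispose_cycle([x, y])
# calls == [("cleanup","x"), ("cleanup","y"), ("del","x"), ("del","y")]
```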
But I'd still like to reclaim the memory. If this is some
long-running server process that is executing arbitrary Python
commands sent to it by clients, it's not nice to leak, period.
(Because of this, I will also need to trace functions, methods and
modules -- these create massive cycles that currently require painful
cleanup. Of course I also need to track down all the roots.)
--Guido van Rossum (home page: http://www.python.org/~guido/)