[Python-Dev] Design question: call __del__ only after successful __init__?
Guido van Rossum
Fri, 03 Mar 2000 11:13:16 -0500
> > I was looking at the code that invokes __del__, with the intent to
> > implement a feature from Java: in Java, a finalizer is only called
> > once per object, even if calling it makes the object live longer.
> Why? That is, in what way is this an improvement over current behavior?
> Note that Java is a bit subtle: a finalizer is only called once by magic;
> explicit calls "don't count".
Of course. Same in my proposal. But I wouldn't call it "by magic" --
just "on behalf of the garbage collector".
> The Java rules add up to quite a confusing mish-mash. Python's rules are
> *currently* clearer.
I don't find the Java rules confusing. It seems quite useful that the
GC promises to call the finalizer at most once -- this can simplify
the finalizer logic. (Otherwise it may have to ask itself, "did I
clean this already?" and leave notes for itself.) Explicit finalizer
calls are always a mistake and thus "don't count" -- the response to
that should in general be "don't do that" (unless you have
particularly stupid callers -- or very fearful lawyers :-).
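To make the simplification concrete, here is a minimal sketch (class and
attribute names are invented for illustration) of the bookkeeping a
finalizer needs today when it may run more than once, and which an
at-most-once guarantee from the GC would make unnecessary:

```python
class Connection:
    """Illustrative resource holder; not a real API."""

    def __init__(self):
        self._closed = False

    def __del__(self):
        # "Did I clean this already?" -- the note the finalizer must
        # leave for itself without an at-most-once guarantee.
        if self._closed:
            return
        self._closed = True
        self.close()

    def close(self):
        pass  # release the underlying resource here
```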
> I deal with possible exceptions in Python constructors the same way I do in
> C++ and Java: if there's a destructor, don't put anything in __init__ that
> may raise an uncaught exception. Anything dangerous is moved into a
> separate .reset() (or .clear() or ...) method. This works well in practice.
Sure, but the rule "if __init__ fails, __del__ won't be called" means
that we don't have to program our __init__ or __del__ quite so
defensively. Most people who design a __del__ probably assume that
__init__ has run to completion. The typical scenario (which has
happened to me! And I *implemented* the damn thing!) is this:
__init__ opens a file and assigns it to an instance variable; __del__
closes the file. This is tested a few times and it works great. Now
in production the file somehow unexpectedly fails to be openable.
Sure, the programmer should've expected that, but she didn't. Now, at
best, the failed __del__ creates an additional confusing error
message on top of the traceback generated by IOError. At worst, the
failed __del__ could wreck the original traceback.
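The scenario above can be sketched as follows (names are invented for
illustration). Under the current rules, __del__ runs even when __init__
raised partway through, so the half-built instance's __del__ hits an
AttributeError on top of the original IOError:

```python
class LogFile:
    """Sketch of the failure scenario; not a real API."""

    def __init__(self, path):
        # If open() raises, self.f is never assigned...
        self.f = open(path)

    def __del__(self):
        # ...and this line then raises AttributeError on a
        # partially initialized instance.
        self.f.close()
```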
Note that I'm not proposing to change the C level behavior; when a
Py<Object>_New() function is halfway through its initialization and
decides to bail out, it does a DECREF(self) and you bet that at this
point the
<object>_dealloc() function gets called (via
self->ob_type->tp_dealloc). Occasionally I need to initialize certain
fields to NULL so that the dealloc() function doesn't try to free
memory that wasn't allocated. Often it's as simple as using XDECREF
instead of DECREF in the dealloc() function (XDECREF is safe when the
argument is NULL; DECREF dumps core on NULL, but saves a load-and-test
when you are sure its argument is a valid object).
> > To implement this, we need a flag in each instance that means "__del__
> > was called".
> At least <wink>.
> > I opened the creation code for instances, looking for the right place
> > to set the flag. I then realized that it might be smart, now that we
> > have this flag anyway, to set it to "true" during initialization. There
> > are a number of exits from the initialization where the object is created
> > but not fully initialized, where the new object is DECREF'ed and NULL is
> > returned. When such an exit is taken, __del__ is called on an
> > incompletely initialized object!
> I agree *that* isn't good. Taken on its own, though, it argues for adding
> an "instance construction completed" flag that __del__ later checks, as if
> its body were:
> if self.__instance_construction_completed:
> That is, the problem you've identified here could be addressed directly.
Sure -- but I would argue that when __del__ returns,
__instance_construction_completed should be reset to false, because
the destruction (conceptually, at least) cancels out the construction!
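A Python-level emulation of that flag might look like this (the
attribute name is invented; the real flag would live in the instance
header, not in __dict__): __del__ is a no-op unless construction
completed, and it resets the flag on the way out because destruction
cancels out construction.

```python
class Resource:
    """Sketch of the construction-completed flag; not a real API."""

    def __init__(self):
        # Set only when __init__ runs to completion.
        self._constructed = True

    def __del__(self):
        if not getattr(self, "_constructed", False):
            return  # __init__ never finished: nothing to undo
        self._constructed = False  # destruction cancels construction
        # ... real cleanup would go here ...
```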
> > Now I have a choice to make. If the class has an __init__, should I
> > clear the flag only after __init__ succeeds? This means that if
> > __init__ raises an exception, __del__ is never called. This is an
> > incompatibility. It's possible that someone has written code that
> > relies on __del__ being called even when __init__ fails halfway, and
> > then their code would break.
> > But it is just as likely that calling __del__ on a partially
> > uninitialized object is a bad mistake, and I am doing all these cases
> > a favor by not calling __del__ when __init__ failed!
> > Any opinions? If nobody speaks up, I'll make the change.
> I'd be in favor of fixing the actual problem; I don't understand the point
> to the rest of it, especially as it has the potential to break existing code
> and I don't see a compensating advantage (surely not compatibility w/
> JPython -- JPython doesn't invoke __del__ methods at all by magic, right?
> or is that changing, and that's what's driving this?).
JPython's a red herring here.
I think that the proposed change probably *fixes* much more code that
is subtly wrong than it breaks code that is relying on __del__ being
called after a partial __init__. All the rules relating to __del__
are confusing (e.g. what __del__ can expect to survive in its globals).
Also note Ping's observation:
| If it's up to the implementation of __del__ to deal with a problem
| that happened during initialization, you only know about the problem
| with very coarse granularity. It's a pain (or even impossible) to
| then rediscover the information you need to recover adequately.
--Guido van Rossum (home page: http://www.python.org/~guido/)