"static" data descriptors and possibly spurious calls to __set__?

John Perks and Sarah Mount johnandsarah at estragon.freeserve.co.uk
Sun Jun 19 16:58:52 CEST 2005

I have a peculiar problem that, naturally, I cannot reproduce outside
the fairly long-winded code that spawned it. Thus the code snippets here
should be taken as shorthand for what I think are the relevant bits,
without actually leading to the erroneous (or at least undesired)
behaviour described.

The problem is that, in attempting to add an attribute to a subclass, an
identically named attribute in the base class is looked up instead, and
its __set__ method is invoked. The actual setting is done via
type.__setattr__(cls, attr, value).

The question is, under what circumstances can an attempt to add an
attribute to a derived class be interpreted as setting a descriptor of
the same name in the base class, and how (under those circumstances) can
one coax the desired behaviour out of the interpreter? It's no good just
stipulating that the derived classes are populated first to avoid this
sort of name-shadowing, as new classes will be dynamically discovered by
the user, and anyway existing classes can be repopulated by the user at
any time.

This slightly contrived example comes about in part from a need to have
"static" data descriptors, where an assignment to the descriptor via the
class (as opposed to via an instance) results in a call to its __set__,
rather than replacing the descriptor in the class's __dict__. (If you
really wanted to replace the descriptor, you'd have to del it explicitly
before assigning to the attribute, and this is in fact done elsewhere in
the code where needed.) I do this by having a special metaclass whose
__setattr__ intercepts such calls:

class Meta(type):
    def __setattr__(self, attr, value):
        if attr not in self.__dict__ or isSpecial(attr):
            # the usual course of action
            type.__setattr__(self, attr, value)
        else:
            # delegate to the existing descriptor instead of rebinding
            self.__dict__[attr].__set__(None, value)

(The isSpecial() function checks for __special_names__ and other things
that should be handled as normal.) I've subsequently found that
applications such as jpype have successfully used this approach, so it
doesn't seem to be a bad idea in itself.
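A runnable sketch of that approach, with a hypothetical isSpecial() that
simply whitelists dunder names, and a minimal recording descriptor to
exercise it (neither helper is from the original code):

```python
def isSpecial(attr):
    # Hypothetical stand-in for the isSpecial() mentioned above:
    # here it just whitelists dunder names.
    return attr.startswith('__') and attr.endswith('__')

class Meta(type):
    def __setattr__(self, attr, value):
        if attr not in self.__dict__ or isSpecial(attr):
            # the usual course of action: bind in the class __dict__
            type.__setattr__(self, attr, value)
        else:
            # a descriptor already sits in this class's own __dict__:
            # delegate to its __set__ instead of replacing it
            self.__dict__[attr].__set__(None, value)

class Recorder(object):
    # Minimal data descriptor that records every __set__ call.
    def __init__(self):
        self.values = []
    def __get__(self, inst, owner=None):
        return self.values[-1] if self.values else None
    def __set__(self, inst, value):
        self.values.append(value)

C = Meta('C', (), {})
r = Recorder()
setattr(C, 'x', r)   # 'x' not yet in C.__dict__: normal binding
C.x = 42             # 'x' now present: r.__set__ is called,
                     # and r stays in C.__dict__
```

The second assignment goes through Recorder.__set__ while the
descriptor itself remains in place, which is exactly the "static"
behaviour described.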

Then, if we have a descriptor...

class Descr(object):
    def __get__(self, inst, owner=None):
        print "Getting!", self, inst, owner
        return 17
    def __set__(self, inst, value):
        print "Setting", self, inst, value

...and then we have a (dynamically-created) class hierarchy...

Base = Meta('Base', (), {})
Derived = Meta('Derived', (Base,), {})

...then we (dynamically) assign a static descriptor to Base...

setattr(Base, 'x', Descr())

...and then to Derived...

setattr(Derived, 'x', Descr())
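Taken in isolation, these snippets should behave as intended: the test
`attr not in self.__dict__` consults only the class's own __dict__,
never the bases, and type.__setattr__ writes into the class's own
namespace rather than invoking descriptors found on base classes. A
self-contained sketch (repeating the metaclass, with a hypothetical
isSpecial() and a logging descriptor) confirming that expectation:

```python
def isSpecial(attr):
    # Hypothetical stand-in for the poster's isSpecial().
    return attr.startswith('__') and attr.endswith('__')

class Meta(type):
    def __setattr__(self, attr, value):
        if attr not in self.__dict__ or isSpecial(attr):
            type.__setattr__(self, attr, value)
        else:
            self.__dict__[attr].__set__(None, value)

class Descr(object):
    # Logs __set__ calls so we can see whether it was invoked.
    def __init__(self):
        self.sets = []
    def __get__(self, inst, owner=None):
        return 17
    def __set__(self, inst, value):
        self.sets.append(value)

Base = Meta('Base', (), {})
Derived = Meta('Derived', (Base,), {})

base_descr = Descr()
setattr(Base, 'x', base_descr)        # installs descriptor on Base

derived_descr = Descr()
setattr(Derived, 'x', derived_descr)  # 'x' is absent from Derived's own
                                      # __dict__, so this installs a new
                                      # descriptor on Derived rather than
                                      # calling Base's __set__
```

In this reduced form Base's descriptor is never touched, so whatever
triggers the spurious __set__ in the real code must lie outside these
snippets.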

...it's this last one that causes the problem. In the real code, the
call to type.__setattr__ referred to above seems to lead to a call to
something like cls.__base__.__dict__[attr].__set__(cls, value). It
doesn't appear possible to recover from this from within the
descriptor's __set__ itself, as any attempt to set the class attribute
from there naturally leads to recursion. I can't set the class's
__dict__ directly, as it's a dictproxy and only exposes a read-only
interface (why is that, by the way?).
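For what it's worth, the read-only proxy is easy to demonstrate: item
assignment on a class's __dict__ raises TypeError, and
type.__setattr__ (or plain setattr) is the supported route for mutating
a class namespace:

```python
class C(object):
    pass

try:
    C.__dict__['y'] = 1       # the dict proxy rejects item assignment
    mutated_directly = True
except TypeError:
    mutated_directly = False

type.__setattr__(C, 'y', 1)   # the supported route works
```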

One further factor that may be relevant (though I couldn't see why) is
that the metaclass in question is set up (via __class__ reassignment) to
be an instance of itself, like the builtin type.

Any enlightenment on this matter would be gratefully received.


