Let's get rid of unbound methods

In my blog I wrote:

Let's get rid of unbound methods. When class C defines a method f, C.f should just return the function object, not an unbound method that behaves almost, but not quite, the same as that function object. The extra type checking on the first argument that unbound methods are supposed to provide is not useful in practice (I can't remember that it ever caught a bug in my code) and sometimes you have to work around it; it complicates function attribute access; and the overloading of unbound and bound methods on the same object type is confusing. Also, the type checking offered is wrong, because it checks for subclassing rather than for duck typing.

This is a really simple change to begin with:

*** funcobject.c 28 Oct 2004 16:32:00 -0000 2.67
--- funcobject.c 4 Jan 2005 18:23:42 -0000
***************
*** 564,571 ****
  static PyObject *
  func_descr_get(PyObject *func, PyObject *obj, PyObject *type)
  {
!     if (obj == Py_None)
!         obj = NULL;
      return PyMethod_New(func, obj, type);
  }
--- 564,573 ----
  static PyObject *
  func_descr_get(PyObject *func, PyObject *obj, PyObject *type)
  {
!     if (obj == NULL || obj == Py_None) {
!         Py_INCREF(func);
!         return func;
!     }
      return PyMethod_New(func, obj, type);
  }

There are some test suite failures but I suspect they all have to do with checking this behavior.

Of course, more changes would be needed: docs, the test suite, and some simplifications to the instance method object implementation in classobject.c.

Does anyone think this is a bad idea? Anyone want to run with it?

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
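A rough Python-level sketch of the difference being proposed (the "today" behavior is Python 2.4; the "proposed" behavior is only what the message above describes, not anything released):

class C(object):
    def f(self):
        return 42

# Today:
print type(C.f)                # <type 'instancemethod'> -- an unbound method
print C.f is C.__dict__['f']   # False: the function is wrapped on the way out
print C.f(C())                 # 42, but only after the first-argument check
# C.f("not a C")               # TypeError: unbound method f() must be called
                               #   with C instance as first argument

# Under the proposal, C.f would simply be the function object itself:
# C.f is C.__dict__['f'] would be True, and C.f("not a C") would no longer be
# rejected up front; there would be no type check on the first argument at all.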

Guido van Rossum wrote:
In my blog I wrote:
Let's get rid of unbound methods. When class C defines a method f, C.f should just return the function object, not an unbound method that behaves almost, but not quite, the same as that function object. The extra type checking on the first argument that unbound methods are supposed to provide is not useful in practice (I can't remember that it ever caught a bug in my code) and sometimes you have to work around it; it complicates function attribute access;
I think this is probably a good thing as it potentially avoids some unintentional aliasing.
and the overloading of unbound and bound methods on the same object type is confusing. Also, the type checking offered is wrong, because it checks for subclassing rather than for duck typing.
duck typing?
This is a really simple change to begin with:
*** funcobject.c 28 Oct 2004 16:32:00 -0000 2.67
--- funcobject.c 4 Jan 2005 18:23:42 -0000
***************
*** 564,571 ****
  static PyObject *
  func_descr_get(PyObject *func, PyObject *obj, PyObject *type)
  {
!     if (obj == Py_None)
!         obj = NULL;
      return PyMethod_New(func, obj, type);
  }
--- 564,573 ----
  static PyObject *
  func_descr_get(PyObject *func, PyObject *obj, PyObject *type)
  {
!     if (obj == NULL || obj == Py_None) {
!         Py_INCREF(func);
!         return func;
!     }
      return PyMethod_New(func, obj, type);
  }
There are some test suite failures but I suspect they all have to do with checking this behavior.
Of course, more changes would be needed: docs, the test suite, and some simplifications to the instance method object implementation in classobject.c.
Does anyone think this is a bad idea?
It *feels* very disruptive to me, but I'm probably wrong. We'll still need unbound builtin methods, so the concept won't go away. In fact, the change would mean that the behavior between builtin methods and python methods would become more inconsistent. Jim -- Jim Fulton mailto:jim@zope.com Python Powered! CTO (540) 361-1714 http://www.python.org Zope Corporation http://www.zope.com http://www.zope.org

On Tue, Jan 04, 2005, Jim Fulton wrote:
Guido van Rossum wrote:
and the overloading of unbound and bound methods on the same object type is confusing. Also, the type checking offered is wrong, because it checks for subclassing rather than for duck typing.
duck typing?
"If it looks like a duck and quacks like a duck, it must be a duck." Python is often referred to as having duck typing because even without formal interface declarations, good practice mostly depends on conformant interfaces rather than subclassing to determine an object's type. -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ "19. A language that doesn't affect the way you think about programming, is not worth knowing." --Alan Perlis

At 01:36 PM 1/4/05 -0500, Jim Fulton wrote:
duck typing?
AKA latent typing or, "if it walks like a duck and quacks like a duck, it must be a duck." Or, more pythonically: if hasattr(ob,"quack") and hasattr(ob,"duckwalk"): # it's a duck This is as distinct from both 'if isinstance(ob,Duck)' and 'if implements(ob,IDuck)'. That is, "duck typing" is determining an object's type by inspection of its method/attribute signature rather than by explicit relationship to some type object.
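A tiny runnable sketch of that distinction (the class names are invented for illustration):

class Duck(object):
    def quack(self):
        print "Quack!"

class Robot(object):           # unrelated to Duck, but offers the same method
    def quack(self):
        print "Beep. (quack emulation mode)"

def make_it_quack(ob):
    # duck typing: all we care about is that the object can quack,
    # not which class it happens to inherit from
    if hasattr(ob, "quack"):
        ob.quack()
    else:
        raise TypeError("object can't quack")

make_it_quack(Duck())    # Quack!
make_it_quack(Robot())   # also fine; isinstance(ob, Duck) would have refused it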

[Guido van Rossum]
Let's get rid of unbound methods.
+1
[Jim Fulton]
duck typing?
Requiring a specific interface instead of a specific type.
[Guido]
Does anyone think this is a bad idea?
[Jim]
It *feels* very disruptive to me, but I'm probably wrong. We'll still need unbound builtin methods, so the concept won't go away. In fact, the change would mean that the behavior between builtin methods and python methods would become more inconsistent.
The type change would be disruptive and guaranteed to break some code. Also, it would partially break down the distinction between functions and methods. The behavior, on the other hand, would remain essentially the same (sans type checking). Raymond

[Jim]
We'll still need unbound builtin methods, so the concept won't go away. In fact, the change would mean that the behavior between builtin methods and python methods would become more inconsistent.
Actually, unbound builtin methods are a different type than bound builtin methods:
>>> type(list.append)
<type 'method_descriptor'>
>>> type([].append)
<type 'builtin_function_or_method'>
Compare this to the same thing for a method on a user-defined class:
>>> type(C.foo)
<type 'instancemethod'>
>>> type(C().foo)
<type 'instancemethod'>
(The 'instancemethod' type knows whether it is a bound or unbound method by checking whether im_self is set.)
[Phillip]
Code that currently does 'aClass.aMethod.im_func' in order to access the function object would break, as would code that inspects 'im_self' to determine whether a method is a class or instance method. (Although code of the latter sort would already break with static methods, I suppose.)
Right. (But I think you're using the terminology in a confused way -- im_self distinguishes between bound and unbound methods. Class methods are a different beast.) I guess for backwards compatibility, function objects could implement dummy im_func and im_self attributes (im_func returning itself and im_self returning None), while issuing a warning that this is a deprecated feature.
[Tim]
Really? Unbound methods are used most often (IME) to call a base-class method from a subclass, like my_base.the_method(self, ...). It's especially easy to forget to write `self, ` there, and the exception msg then is quite focused because of that extra bit of type checking. Otherwise I expect we'd see a more-mysterious AttributeError or TypeError when the base method got around to trying to do something with the bogus `self` passed to it.
Hm, I hadn't thought of this.
I could live with that, though.
Most cases would be complaints about argument counts (it gets hairier when there are default args so the arg count is variable). Ironically, I get those all the time these days due to the reverse error: using super() but forgetting *not* to pass self!
Across the Python, Zope2 and Zope3 code bases, types.UnboundMethodType is defined once and used once (believe it or not, in unittest.py).
But that might be because BoundMethodType is the same type object... -- --Guido van Rossum (home page: http://www.python.org/~guido/)

Guido van Rossum wrote:
[Jim]
We'll still need unbound builtin methods, so the concept won't go away. In fact, the change would mean that the behavior between builtin methods and python methods would become more inconsistent.
Actually, unbound builtin methods are a different type than bound builtin methods:
Of course, but conceptually they are similar. You would still encounter the concept if you got an unbound builtin method. Jim -- Jim Fulton mailto:jim@zope.com Python Powered! CTO (540) 361-1714 http://www.python.org Zope Corporation http://www.zope.com http://www.zope.org

Hi Jim, On Tue, Jan 04, 2005 at 02:44:43PM -0500, Jim Fulton wrote:
Actually, unbound builtin methods are a different type than bound builtin methods:
Of course, but conceptually they are similar. You would still encounter the concept if you got an unbound builtin method.
There are no such things as unbound builtin methods:
>>> list.append is list.__dict__['append']
True
In other words 'list.append' just returns exactly the same object as stored in the list type's dict. Guido's proposal is to make Python methods behave in the same way. Armin
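A short interpreter session illustrating the asymmetry Armin describes (Python 2.4 behavior; the class C is just an example):

>>> list.append is list.__dict__['append']   # builtin: no unbound wrapper
True
>>> class C(object):
...     def foo(self): pass
...
>>> C.foo is C.__dict__['foo']               # user-defined class: wrapped today
False
>>> type(C.foo)
<type 'instancemethod'>

Under the proposal, the second comparison would become True as well.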

Armin Rigo wrote:
Hi Jim,
On Tue, Jan 04, 2005 at 02:44:43PM -0500, Jim Fulton wrote:
Actually, unbound builtin methods are a different type than bound builtin methods:
Of course, but conceptually they are similar. You would still encounter the concept if you got an unbound builtin method.
There are no such things as unbound builtin methods:
list.append is list.__dict__['append']
True
In other words 'list.append' just returns exactly the same object as stored in the list type's dict. Guido's proposal is to make Python methods behave in the same way.
OK, interesting. I'm sold then. Jim -- Jim Fulton mailto:jim@zope.com Python Powered! CTO (540) 361-1714 http://www.python.org Zope Corporation http://www.zope.com http://www.zope.org

At 11:40 AM 1/4/05 -0800, Guido van Rossum wrote:
[Jim]
We'll still need unbound builtin methods, so the concept won't go away. In fact, the change would mean that the behavior between builtin methods and python methods would become more inconsistent.
Actually, unbound builtin methods are a different type than bound builtin methods:
>>> type(list.append)
<type 'method_descriptor'>
>>> type([].append)
<type 'builtin_function_or_method'>
Compare this to the same thing for a method on a user-defined class:
>>> type(C.foo)
<type 'instancemethod'>
>>> type(C().foo)
<type 'instancemethod'>
(The 'instancemethod' type knows whether it is a bound or unbound method by checking whether im_self is set.)
[Phillip]
Code that currently does 'aClass.aMethod.im_func' in order to access the function object would break, as would code that inspects 'im_self' to determine whether a method is a class or instance method. (Although code of the latter sort would already break with static methods, I suppose.)
Right. (But I think you're using the terminology in a confused way -- im_self distinguishes between bound and unbound methods. Class methods are a different beast.)
IIUC, when you do 'SomeClass.aMethod', if 'aMethod' is a classmethod, then you will receive a bound method with an im_self of 'SomeClass'. So, if you are introspecting items listed in 'dir(SomeClass)', this will be your only clue that 'aMethod' is a class method. Similarly, the fact that you get an unbound method object if 'aMethod' is an instance method, allows you to distinguish it from a static method (if the object is a function). That is, I'm saying that code that looks at the type and attributes of 'aMethod' as retrieved from 'SomeClass' will now not be able to distinguish between a static method and an instance method, because both will return a function instance. However, the 'inspect' module uses __dict__ rather than getattr to get at least some attributes, so it doesn't rely on this property.
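A sketch of the kind of getattr-based introspection being described, and how the proposal would affect it (the helper name describe is invented for illustration):

import types

class C(object):
    def instance_m(self): pass
    @classmethod
    def class_m(cls): pass
    @staticmethod
    def static_m(): pass

def describe(cls, name):
    attr = getattr(cls, name)
    if isinstance(attr, types.MethodType):
        if attr.im_self is cls:
            return "class method"          # bound, with im_self set to the class
        return "instance method"           # unbound today; im_self is None
    if isinstance(attr, types.FunctionType):
        return "static method (plain function)"
    return type(attr).__name__

for name in ("instance_m", "class_m", "static_m"):
    print name, "->", describe(C, name)

# If C.instance_m became a plain function under the proposal, this check could
# no longer tell instance methods apart from static methods.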
I guess for backwards compatibility, function objects could implement dummy im_func and im_self attributes (im_func returning itself and im_self returning None), while issuing a warning that this is a deprecated feature.
+1 on this part if the proposal goes through. On the proposal as a whole, I'm -0, as I'm not quite clear on what this is going to simplify enough to justify the various semantic impacts such as upcalls, pickling, etc. Method objects will still have to exist, so ISTM that this is only going to streamline the "__get__(None,type)" branch of functions' descriptor code, and the check for "im_self is None" in the __call__ of method objects. (And maybe some eval loop shortcuts for calling methods?)
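Purely as an illustration of the dummy-attribute idea quoted above, a pure-Python stand-in (this is not how funcobject.c would actually implement it, and the class name is invented):

import warnings

class function_with_legacy_attrs(object):
    """Sketch of the backwards-compatibility idea: behaves like the wrapped
    function, but still answers im_func/im_self, with a DeprecationWarning."""
    def __init__(self, func):
        self._func = func
    def __call__(self, *args, **kwargs):
        return self._func(*args, **kwargs)
    def __getattr__(self, name):
        if name == 'im_func':
            warnings.warn("im_func on plain functions is deprecated",
                          DeprecationWarning, stacklevel=2)
            return self            # "im_func returning itself"
        if name == 'im_self':
            warnings.warn("im_self on plain functions is deprecated",
                          DeprecationWarning, stacklevel=2)
            return None            # "im_self returning None"
        return getattr(self._func, name)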

On Jan 4, 2005, at 1:28 PM, Guido van Rossum wrote:
Let's get rid of unbound methods. When class C defines a method f, C.f should just return the function object, not an unbound method that behaves almost, but not quite, the same as that function object. The extra type checking on the first argument that unbound methods are supposed to provide is not useful in practice (I can't remember that it ever caught a bug in my code) and sometimes you have to work around it; it complicates function attribute access; and the overloading of unbound and bound methods on the same object type is confusing. Also, the type checking offered is wrong, because it checks for subclassing rather than for duck typing.
+1 I like this idea. It may have some effect on current versions of PyObjC though, because we really do care about what self is in order to prevent crashes. This is not a discouragement; we are already using custom descriptors and a metaclass, so it won't be a problem to do this ourselves if we are not doing it already. I'll try and find some time later in the week to play with this patch to see if it does break PyObjC or not. If it breaks PyObjC, I can make sure that PyObjC 1.3 will be compatible with such a runtime change, as we're due for a refactoring in that area anyway. -bob

On Tue, Jan 04, 2005 at 10:28:03AM -0800, Guido van Rossum wrote:
In my blog I wrote:
Let's get rid of unbound methods. When class C defines a method f, C.f should just return the function object, not an unbound method that behaves almost, but not quite, the same as that function object. The extra type checking on the first argument that unbound methods are supposed to provide is not useful in practice (I can't remember that it ever caught a bug in my code) and sometimes you have to work around it; it complicates function attribute access; and the overloading of unbound and bound methods on the same object type is confusing. Also, the type checking offered is wrong, because it checks for subclassing rather than for duck typing.
Does anyone think this is a bad idea? Anyone want to run with it?
I like the idea, it means I can get rid of this[1]

    func = getattr(cls, 'do_command', None)
    setattr(cls, 'do_command', staticmethod(func.im_func))
    # don't let anyone on c.l.py see this

.. or at least change the comment *grin*,
-Jack
[1] http://cvs.sourceforge.net/viewcvs.py/lyntin/lyntin40/sandbox/leantin/mudcom...

At 10:28 AM 1/4/05 -0800, Guido van Rossum wrote:
Of course, more changes would be needed: docs, the test suite, and some simplifications to the instance method object implementation in classobject.c.
Does anyone think this is a bad idea?
Code that currently does 'aClass.aMethod.im_func' in order to access the function object would break, as would code that inspects 'im_self' to determine whether a method is a class or instance method. (Although code of the latter sort would already break with static methods, I suppose.) Cursory skimming of the first 100 Google hits for 'im_func' seems to show at least half a dozen instances of the first type of code, though. Such code would also be in the difficult position of having to do things two ways in order to be both forward and backward compatible. Also, I seem to recall once having relied on the behavior of a dynamically-created unbound method (via new.instancemethod) in order to create a descriptor of some sort. But I don't remember where or when I did it or whether I still care. :)

Phillip J. Eby wrote:
At 10:28 AM 1/4/05 -0800, Guido van Rossum wrote:
Of course, more changes would be needed: docs, the test suite, and some simplifications to the instance method object implementation in classobject.c.
Does anyone think this is a bad idea?
Code that currently does 'aClass.aMethod.im_func' in order to access the function object would break, as would code that inspects 'im_self' to determine whether a method is a class or instance method. (Although code of the latter sort would already break with static methods, I suppose.)
Code of the latter sort wouldn't break with the change. We'd still have bound methods. Jim -- Jim Fulton mailto:jim@zope.com Python Powered! CTO (540) 361-1714 http://www.python.org Zope Corporation http://www.zope.com http://www.zope.org

[Guido]
In my blog I wrote:
Let's get rid of unbound methods. When class C defines a method f, C.f should just return the function object, not an unbound method that behaves almost, but not quite, the same as that function object. The extra type checking on the first argument that unbound methods are supposed to provide is not useful in practice (I can't remember that it ever caught a bug in my code)
Really? Unbound methods are used most often (IME) to call a base-class method from a subclass, like my_base.the_method(self, ...). It's especially easy to forget to write `self, ` there, and the exception msg then is quite focused because of that extra bit of type checking. Otherwise I expect we'd see a more-mysterious AttributeError or TypeError when the base method got around to trying to do something with the bogus `self` passed to it. I could live with that, though.
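A concrete example of the two failure modes being compared here (the error messages below are paraphrased, and the classes are invented for illustration):

class Base(object):
    def the_method(self, prefix=""):
        return prefix + self.name

class Sub(Base):
    def __init__(self):
        self.name = "sub"
    def the_method(self, prefix=""):
        # oops -- forgot to write `self, ` in the base-class call
        return Base.the_method(prefix) + "!"

Sub().the_method("x")

# Today the unbound method's type check fails right at the call site:
#   TypeError: unbound method the_method() must be called with Base instance
#   as first argument (got str instance instead)
#
# Without that check (and because prefix has a default value, there is no
# argument-count complaint either), "x" silently becomes self and the failure
# shows up later and less helpfully, e.g.:
#   AttributeError: 'str' object has no attribute 'name'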
and sometimes you have to work around it;
For me, 0 times in ... what? ... about 14 years <wink>.
it complicates function attribute access; and the overloading of unbound and bound methods on the same object type is confusing.
Yup, it is a complication, without a compelling use case I know of. Across the Python, Zope2 and Zope3 code bases, types.UnboundMethodType is defined once and used once (believe it or not, in unittest.py).

Tim Peters <tim.peters@gmail.com> wrote:
Guido wrote:
Let's get rid of unbound methods. When class C defines a method [snip]
Really? Unbound methods are used most often (IME) to call a base-class method from a subclass, like my_base.the_method(self, ...). It's especially easy to forget to write `self, ` there, and the exception msg then is quite focused because of that extra bit of type checking. Otherwise I expect we'd see a more-mysterious AttributeError or TypeError when the base method got around to trying to do something with the bogus `self` passed to it.
Agreed. While it seems that super() is the 'modern paradigm' for this, I have been using base.method(self, ...) for years now, and have been quite happy with it. After attempting to convert my code to use the super() paradigm, and having difficulty, I discovered James Knight's "Python's Super Considered Harmful" (available at http://www.ai.mit.edu/people/jknight/super-harmful/ ), wherein I discovered how super really worked (I should have read the documentation in the first place), and reverted my changes to the base.method version.
I could live with that, though.
I could live with it too, but I would probably use an equivalent of the following (with actual type checking):

def mysuper(typ, obj):
    lm = list(obj.__class__.__mro__)
    indx = lm.index(typ)
    if indx == 0:
        return obj
    return super(lm[indx-1], obj)

All in all, I'm -0. I don't desire to replace all of my base.method with mysuper(base, obj).method, but if I must sacrifice convenience for the sake of making Python 2.5's implementation simpler, I guess I'll deal with it. My familiarity with grep's regular expressions leaves something to be desired, so I don't know how often base.method(self,...) is or is not used in the standard library.
- Josiah

[Josiah]
Agreed. While it seems that super() is the 'modern paradigm' for this, I have been using base.method(self, ...) for years now, and have been quite happy with it. After attempting to convert my code to use the super() paradigm, and having difficulty, I discovered James Knight's "Python's Super Considered Harmful" (available at http://www.ai.mit.edu/people/jknight/super-harmful/ ), wherein I discovered how super really worked (I should have read the documentation in the first place), and reverted my changes to the base.method version.
I think that James Y Knight's page misrepresents the issue. Quoting:

"""
Note that the __init__ method is not special -- the same thing happens with any method, I just use __init__ because it is the method that most often needs to be overridden in many classes in the hierarchy.
"""

But __init__ *is* special, in that it is okay for a subclass __init__ (or __new__) to have a different signature than the base class __init__; this is not true for other methods. If you change a regular method's signature, you would break Liskov substitutability (i.e., your subclass instance wouldn't be acceptable where a base class instance would be acceptable). Super is intended for uses that are designed with method cooperation in mind, so I agree with the best practices in James's Conclusion:

"""
* Use it consistently, and document that you use it, as it is part of the external interface for your class, like it or not.
* Never call super with anything but the exact arguments you received, unless you really know what you're doing.
* When you use it on methods whose acceptable arguments can be altered on a subclass via addition of more optional arguments, always accept *args, **kw, and call super like "super(MyClass, self).currentmethod(alltheargsideclared, *args, **kwargs)". If you don't do this, forbid addition of optional arguments in subclasses.
* Never use positional arguments in __init__ or __new__. Always use keyword args, and always call them as keywords, and always pass all keywords on to super.
"""

But that's not the same as calling it harmful. :-(
--
--Guido van Rossum (home page: http://www.python.org/~guido/)
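A minimal sketch of a hierarchy that follows those conventions (all class names invented; the point is only that every method passes on what it received and the MRO visits each implementation once):

class Root(object):
    def save(self, *args, **kwargs):
        print "Root.save"
        # Root deliberately stops here; object() has no save() to delegate to.

class Logged(Root):
    def save(self, *args, **kwargs):
        print "Logged.save"
        super(Logged, self).save(*args, **kwargs)

class Validated(Root):
    def save(self, strict=True, *args, **kwargs):
        print "Validated.save strict=%r" % strict
        # pass on exactly what we received, including our own argument
        super(Validated, self).save(strict, *args, **kwargs)

class Document(Logged, Validated):
    pass

Document().save()
# MRO: Document -> Logged -> Validated -> Root, so each save() runs once:
#   Logged.save
#   Validated.save strict=True
#   Root.save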

I'm not sure why super got dragged into this, but... On Jan 4, 2005, at 9:02 PM, Guido van Rossum wrote:
I think that James Y Knight's page misrepresents the issue. Quoting:
But __init__ *is* special, in that it is okay for a subclass __init__ (or __new__) to have a different signature than the base class __init__; this is not true for other methods. If you change a regular method's signature, you would break Liskov substitutability (i.e., your subclass instance wouldn't be acceptable where a base class instance would be acceptable).
You're right, some issues do apply to __init__ alone. However, two important ones do not:
- The issue of mixing super() and explicit calls to the superclass's method occurs with any method. (Thus making it difficult/impossible for a framework to convert to using super without breaking client code that subclasses.)
- Adding optional arguments to one branch of the inheritance tree, but not another, or adding different optional args in both branches. (This breaks unless you always pass optional args as keyword args, and all methods take **kwargs and pass that on to super.)
Super is intended for uses that are designed with method cooperation in mind, so I agree with the best practices in James's Conclusion: [[omitted]] But that's not the same as calling it harmful. :-(
The 'harmfulness' comes from people being confused by, and misusing super, because it is so very very easy to do so, and so very hard to use correctly. From what I can tell, it is mostly used incorrectly. *Especially* uses in __init__ or __new__. Many people seem to use super in their __init__ methods thinking that it'll magically improve something (like perhaps making multiple inheritance trees that include their class work better), only to just cause a different set of problems for multiple inheritance trees, instead, because they don't realize they need to follow those recommendations. Here's another page that says much the same thing, but from the viewpoint of recommending the use of super and showing you all the hoops to use it right: http://wiki.osafoundation.org/bin/view/Chandler/UsingSuper James PS, I wrote that page last pycon but never got around to finishing it up and therefore never really publically announced it. But I told some people about it and then they kept asking me for the URL so I linked to it, and well, then google found it of course, so I guess it's public now. ;)

The issue of mixing super() and explicit calls to the superclass's method occur with any method. (Thus making it difficult/impossible for a framework to convert to using super without breaking client code that subclasses).
Well, client classes which are leaves of the class tree can still safely use BaseClass.thisMethod(self, args) -- it's only classes that are written to be extended that must all be converted to using super(). So I'm not sure how you think your clients are breaking.
Adding optional arguments to one branch of the inheritance tree, but not another, or adding different optional args in both branches. (breaks unless you always pass optional args as keywordargs, and all methods take **kwargs and pass that on to super).
But that breaks anyway; I don't see how using the old Base.method(self, args) approach makes this easier, *unless* you are using single inheritance. If you're expecting single inheritance anyway, why bother with super()?
Super is intended for uses that are designed with method cooperation in mind, so I agree with the best practices in James's Conclusion: [[omitted]] But that's not the same as calling it harmful. :-(
The 'harmfulness' comes from people being confused by, and misusing super, because it is so very very easy to do so, and so very hard to use correctly.
And using multiple inheritance the old way was not confusing? Surely you are joking.
From what I can tell, it is mostly used incorrectly. *Especially* uses in __init__ or __new__. Many people seem to use super in their __init__ methods thinking that it'll magically improve something (like perhaps making multiple inheritance trees that include their class work better), only to just cause a different set of problems for multiple inheritance trees, instead, because they don't realize they need to follow those recommendations.
If they're happy with single inheritance, let them use super() incorrectly. It works, and that's what counts. Their code didn't work right with multiple inheritance before, it still doesn't. Some people just are uncomfortable with calling Base.method(self, ...) and feel super is "more correct". Let them.
Here's another page that says much the same thing, but from the viewpoint of recommending the use of super and showing you all the hoops to use it right: http://wiki.osafoundation.org/bin/view/Chandler/UsingSuper
The problem isn't caused by super but by multiple inheritance.
James
PS, I wrote that page last pycon but never got around to finishing it up and therefore never really publically announced it. But I told some people about it and then they kept asking me for the URL so I linked to it, and well, then google found it of course, so I guess it's public now. ;)
Doesn't mean you can't fix it. :) -- --Guido van Rossum (home page: http://www.python.org/~guido/)

On Jan 5, 2005, at 1:23 PM, Guido van Rossum wrote:
The issue of mixing super() and explicit calls to the superclass's method occur with any method. (Thus making it difficult/impossible for a framework to convert to using super without breaking client code that subclasses).
Well, client classes which are leaves of the class tree can still safely use BaseClass.thisMethod(self, args) -- it's only classes that are written to be extended that must all be converted to using super(). So I'm not sure how you think your clients are breaking.
See the section "Subclasses must use super if their superclasses do". This is particularly a big issue with __init__.
Adding optional arguments to one branch of the inheritance tree, but not another, or adding different optional args in both branches. (breaks unless you always pass optional args as keywordargs, and all methods take **kwargs and pass that on to super).
But that breaks anyway; I don't see how using the old Base.method(self, args) approach makes this easier, *unless* you are using single inheritance. If you're expecting single inheritance anyway, why bother with super()?
There is a distinction between simple multiple inheritance, which did work in the old system vs. multiple inheritance in a diamond structure which did not work in the old system. However, consider something like the following (ignore the Interface/implements bit if you want. It's just to point out a common situation where two classes can independently implement the same method without having a common superclass):

class IFrob(Interface):
    def frob():
        """Frob the knob"""

class A:
    implements(IFrob)
    def frob(self, foo=False):
        print "A.frob(foo=%r)"%foo

class B:
    implements(IFrob)
    def frob(self, bar=False):
        print "B.frob(bar=%r)"%bar

class C(A,B):
    def m(self, foo=False, bar=False):
        A.m(self, foo=foo)
        B.m(self, bar=bar)
        print "C.frob(foo=%r, bar=%r)"%(foo,bar)

Now, how do you write that to use super? Here's what I come up with:

class IFrob(Interface):
    def frob():
        """Frob the knob"""

class A(object):
    implements(IFrob)
    def frob(self, foo=False, *args, **kwargs):
        try:
            f = super(A, self).frob
        except AttributeError:
            pass
        else:
            f(foo=foo, *args, **kwargs)
        print "A.frob(foo=%r)"%foo

class B(object):
    implements(IFrob)
    def frob(self, bar=False, *args, **kwargs):
        try:
            f = super(B, self).frob
        except AttributeError:
            pass
        else:
            f(bar=bar, *args, **kwargs)
        print "B.frob(bar=%r)"%bar

class C(A,B):
    def frob(self, foo=False, bar=False, *args, **kwargs):
        super(C, self).frob(foo, bar, *args, **kwargs)
        print "C.frob(foo=%r, bar=%r)"%(foo,bar)
And using multiple inheritance the old way was not confusing? Surely you are joking.
It was pretty simple until you start having diamond structures. Then it's complicated. Now, don't get me wrong, I think that MRO-calculating mechanism really is "the right thing", in the abstract. I just think the way it works out as implemented in python is really confusing and it's easy to be worse off with it than without it.
If they're happy with single inheritance, let them use super() incorrectly. It works, and that's what counts. Their code didn't work right with multiple inheritance before, it still doesn't. Some people just are uncomfortable with calling Base.method(self, ...) and feel super is "more correct". Let them.
Their code worked right in M-I without diamonds before. Now it likely doesn't work in M-I at all. James

On Wed, 5 Jan 2005 18:00:38 -0500, James Y Knight <foom@fuhm.net> wrote:
On Jan 5, 2005, at 1:23 PM, Guido van Rossum wrote:
The issue of mixing super() and explicit calls to the superclass's method occur with any method. (Thus making it difficult/impossible for a framework to convert to using super without breaking client code that subclasses).
Well, client classes which are leaves of the class tree can still safely use BaseClass.thisMethod(self, args) -- it's only classes that are written to be extended that must all be converted to using super(). So I'm not sure how you think your clients are breaking.
See the section "Subclasses must use super if their superclasses do". This is particularly a big issue with __init__.
I see. I was thinking about subclassing a single class, you are talking about subclassing multiple bases. Subclassing two or more classes is *always* very subtle. Before 2.2 and super(), the only sane way to do that was to have all except one base class be written as a mix-in class for a specific base class (or family of base classes). The idea of calling both __init__ methods doesn't work if there's a diamond; if there *is* a diamond (or could be one), using super() is the only sane solution.
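A small concrete diamond showing the point (class names invented): with explicit base-class calls the shared base's __init__ runs twice, while a cooperative super() version runs it exactly once.

class Base(object):
    def __init__(self):
        print "Base.__init__"

class Left(Base):
    def __init__(self):
        print "Left.__init__"
        Base.__init__(self)

class Right(Base):
    def __init__(self):
        print "Right.__init__"
        Base.__init__(self)

class Both(Left, Right):
    def __init__(self):
        print "Both.__init__"
        Left.__init__(self)
        Right.__init__(self)

Both()    # Base.__init__ is printed twice

# Rewriting every __init__ above to call super(Cls, self).__init__() instead
# makes the MRO (Both -> Left -> Right -> Base) run Base.__init__ only once.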
Adding optional arguments to one branch of the inheritance tree, but not another, or adding different optional args in both branches. (breaks unless you always pass optional args as keywordargs, and all methods take **kwargs and pass that on to super).
But that breaks anyway; I don't see how using the old Base.method(self, args) approach makes this easier, *unless* you are using single inheritance. If you're expecting single inheritance anyway, why bother with super()?
There is a distinction between simple multiple inheritance, which did work in the old system
Barely; see above.
vs. multiple inheritance in a diamond structure which did not work in the old system. However, consider something like the following (ignore the Interface/implements bit if you want. It's just to point out a common situation where two classes can independently implement the same method without having a common superclass):
class IFrob(Interface):
    def frob():
        """Frob the knob"""

class A:
    implements(IFrob)
    def frob(self, foo=False):
        print "A.frob(foo=%r)"%foo

class B:
    implements(IFrob)
    def frob(self, bar=False):
        print "B.frob(bar=%r)"%bar

class C(A,B):
    def m(self, foo=False, bar=False):    [I presume you meant frob instead of m here]
        A.m(self, foo=foo)
        B.m(self, bar=bar)
        print "C.frob(foo=%r, bar=%r)"%(foo,bar)
Now, how do you write that to use super?
The problem isn't in super(), the problem is that the classes A and B aren't written cooperatively, so attempting to combine them using multiple inheritance is asking for trouble. You'd be better off making C a container class that has separate A and B instances.
And using multiple inheritance the old way was not confusing? Surely you are joking.
It was pretty simple until you start having diamond structures. Then it's complicated. Now, don't get me wrong, I think that MRO-calculating mechanism really is "the right thing", in the abstract. I just think the way it works out as implemented in python is really confusing and it's easy to be worse off with it than without it.
So then don't use it. You couldn't have diamonds at all before 2.2. With *care* and *understanding* you can do the right thing in 2.2 and beyond. I'm getting tired of super() being blamed for the problems inherent to cooperative multiple inheritance. super() is the tool that you need to solve a hairy problem; but don't blame super() for the problem's hairiness.
If they're happy with single inheritance, let them use super() incorrectly. It works, and that's what counts. Their code didn't work right with multiple inheritance before, it still doesn't. Some people just are uncomfortable with calling Base.method(self, ...) and feel super is "more correct". Let them.
Their code worked right in M-I without diamonds before. Now it likely doesn't work in M-I at all.
If you have a framework with classes written using the old paradigm that a subclass must call the __init__ (or frob) method of each of its superclasses, you can't change your framework to use super() instead while maintaining backwards compatibility. If you didn't realize that before you made the change and then got bitten by it, tough. -- --Guido van Rossum (home page: http://www.python.org/~guido/)

On Jan 5, 2005, at 6:36 PM, Guido van Rossum wrote:
The idea of calling both __init__ methods doesn't work if there's a diamond; if there *is* a diamond (or could be one), using super() is the only sane solution.
Very true.
So then don't use it. You couldn't have diamonds at all before 2.2. With *care* and *understanding* you can do the right thing in 2.2 and beyond.
I'm getting tired of super() being blamed for the problems inherent to cooperative multiple inheritance. super() is the tool that you need to solve a hairy problem; but don't blame super() for the problem's hairiness.
Please notice that I'm talking about concrete, real issues, not just a "super is bad!" rant. These are initially non-obvious (to me, at least) things that will actually happen in real code and that you actually do need to watch out for if you use super. Yes. It is a hard problem. However, the issues I talk about are not issues with the functionality and theory of calling the next method in an MRO, they are issues with the combination of MROs, the implementation of MRO-calling in python (via "super"), and current practices in writing python code. They are not inherent in cooperative multiple inheritance, but occur mostly because of its late addition to python, and the cumbersome way in which you have to invoke super. I wrote up the page as part of an investigation into converting Twisted to use super. I thought it would be a good idea to do the conversion, but others told me it would be a bad idea for backwards compatibility reasons. I did not believe, at first, and conducted experiments. In the end, I concluded that it is not possible, because of the issues with mixing the new and old paradigm.
If you have a framework with classes written using the old paradigm that a subclass must call the __init__ (or frob) method of each of its superclasses, you can't change your framework to use super() instead while maintaining backwards compatibility.
Yep, that's what I said, too.
If you didn't realize that before you made the change and then got bitten by it, tough.
Luckily, I didn't get bitten by it because I figured out the consequences and wrote a webpage about them before making an incorrect code change.

Leaving behind the backwards compatibility issues... In order to make super really nice, it should be easier to use right. Again, the two major issues that cause problems are: 1) having to declare every method with *args, **kwargs, and having to pass those and all the arguments you take explicitly to super, and 2) that traditionally __init__ is called with positional arguments.

To fix #1, it would be really nice if you could write code something like the following snippet. Notice especially here that the 'bar' argument gets passed through C.__init__ and A.__init__, into D.__init__, without the previous two having to do anything about it. However, if you ask me to detail how this could *possibly* *ever* work in python, I have no idea. Probably the answer is that it can't.

class A(object):
    def __init__(self):
        print "A"
        next_method

class B(object):
    def __init__(self):
        print "B"
        next_method

class C(A):
    def __init__(self, foo):
        print "C","foo=",foo
        next_method
        self.foo=foo

class D(B):
    def __init__(self, bar):
        print "D", "bar=",bar
        next_method
        self.bar=bar

class E(C,D):
    def __init__(self, foo, bar):
        print "E"
        next_method

class E2(C,D):
    """Even worse, not defining __init__ should work right too."""

E(foo=10, bar=20)
E2(foo=10, bar=20)

# Yet, these ought to result in a TypeError because the quaz keyword isn't recognized by
# any __init__ method on any class in the hierarchy above E/E2:
E(foo=10, bar=20, quaz=5)
E2(foo=10, bar=20, quaz=5)

James

Please notice that I'm talking about concrete, real issues, not just a "super is bad!" rant.
Then why is the title "Python's Super Considered Harmful" ??? Here's my final offer. Change the title to something like "Multiple Inheritance Pitfalls in Python" and nobody will get hurt.
They are not inherent in cooperative multiple inheritance, but occur mostly because of its late addition to python,
Would you rather not have seen it (== cooperative inheritance) added at all?
and the cumbersome way in which you have to invoke super.
Given Python's dynamic nature I couldn't think of a way to make it less cumbersome. I see you tried (see below) and couldn't either. At this point I tend to say "put up or shut up."
I wrote up the page as part of an investigation into converting Twisted to use super. I thought it would be a good idea to do the conversion, but others told me it would be a bad idea for backwards compatibility reasons. I did not believe, at first, and conducted experiments. In the end, I concluded that it is not possible, because of the issues with mixing the new and old paradigm.
So it has nothing to do with the new paradigm, just with backwards compatibility. I appreciate those issues (more than you'll ever know) but I don't see why you should try to discourage others from using the new paradigm, which is what your article appears to do.
Leaving behind the backwards compatibility issues...
In order to make super really nice, it should be easier to use right. Again, the two major issues that cause problems are: 1) having to declare every method with *args, **kwargs, and having to pass those and all the arguments you take explicitly to super,
That's only an issue with __init__ or with code written without cooperative MI in mind. When using cooperative MI, you shouldn't redefine method signatures, and all is well.
and 2) that traditionally __init__ is called with positional arguments.
Cooperative MI doesn't have a really good solution for __init__. Defining and calling __init__ only with keyword arguments is a good solution. But griping about "traditionally" is a backwards compatibility issue, which you said you were leaving behind.
To fix #1, it would be really nice if you could write code something like the following snippet. Notice especially here that the 'bar' argument gets passed through C.__init__ and A.__init__, into D.__init__, without the previous two having to do anything about it. However, if you ask me to detail how this could *possibly* *ever* work in python, I have no idea. Probably the answer is that it can't.
Exactly. What is your next_method statement supposed to do? No need to reply except when you've changed the article. I'm tired of the allegations. -- --Guido van Rossum (home page: http://www.python.org/~guido/)

Then why is the title "Python's Super Considered Harmful" ???
Here's my final offer. Change the title to something like "Multiple Inheritance Pitfalls in Python" and nobody will get hurt.
Or better yet, considering the recent thread on Python marketing, "Multiple Inheritance Mastery in Python" :-). Bill

On Jan 6, 2005, at 12:13 PM, Guido van Rossum wrote:
So it has nothing to do with the new paradigm, just with backwards compatibility. I appreciate those issues (more than you'll ever know) but I don't see why you should try to discourage others from using the new paradigm, which is what your article appears to do.
This is where I'm coming from: In my own code, it is very rare to have diamond inheritance structures. And if there are, even more rare that both sides need to cooperatively override a method. Given that, super has no necessary advantage. And it has disadvantages. - Backwards compatibility issues - Going along with that, inadvertent mixing of paradigms (you have to remember which classes you use super with and which you don't or your code might have hard-to-find errors). - Take your choice of: a) inability to add optional arguments to your methods, or b) having to use *args, **kwargs on every method and call super with those. - Having to try/catch AttributeErrors from super if you use interfaces instead of a base class to define the methods in use. So, I am indeed attempting to discourage people from using it, despite its importance. And also trying to educate people as to what they need to do if they have a case where it is necessary to use or if they just decide I'm full of crap and want to use it anyways.
In order to make super really nice, it should be easier to use right. Again, the two major issues that cause problems are: 1) having to declare every method with *args, **kwargs, and having to pass those and all the arguments you take explicitly to super,
That's only an issue with __init__ or with code written without cooperative MI in mind. When using cooperative MI, you shouldn't redefine method signatures, and all is well.
I have two issues with that statement. Firstly, it's often quite useful to be able to add optional arguments to methods. Secondly, that's not a property of cooperative MI, but one of cooperative MI in python. As a counterpoint, with Dylan, you can add optional keyword arguments to a method as long as the generic was defined with the notation #key (specifying that it will accept keyword arguments at all). This is of course even true in a single inheritance situation like in the example below.

Now please don't misunderstand me, here. I'm not at all trying to say that Python sucks because it's not Dylan. I don't even particularly like Dylan, but it does have a number of good ideas. Additionally, Python and Dylan differ in fundamental ways: Python has classes and inheritance, Dylan has generic functions/multimethods. Dylan is (I believe) generally whole-program-at-a-time compiled/optimized, Python is not. So, I think a solution for python would have to be fundamentally different as well. But anyways, an example of what I'm talking about:

define generic g (arg1 :: <number>, #key);

define method g (arg1 :: <number>, #key)
  format-out("number.g\n");
end method g;

define method g (arg1 :: <rational>, #key base :: <integer> = 10)
  next-method();
  format-out("rational.g %d\n", base);
end method g;

define method g (arg1 :: <integer>, #key)
  next-method();
  format-out("integer.g\n");
end method g;

// Prints:
// number.g
// rational.g 1
// integer.g
g(1, base: 1);

// Produces: Error: Unrecognized keyword (base) as the second argument in call of g
g(1.0, base: 1);
Cooperative MI doesn't have a really good solution for __init__. Defining and calling __init__ only with keyword arguments is a good solution. But griping about "traditionally" is a backwards compatibility issue, which you said you were leaving behind.
Well, kind of. In my mind, it was a different kind of issue, as it isn't solved by everyone moving over to using super. As nearly all the code that currently uses super does so without using keyword arguments for __init__, I considered it not so much backwards compatibility as a re-educating users kind of issue, the same as the requirement for passing along all your arguments.
Exactly. What is your next_method statement supposed to do?
Well that's easy. It's supposed to call the next function in the MRO with _all_ the arguments passed along, even the ones that the current function didn't explicitly ask for. I was afraid you might ask a hard question, like: if E2 inherits C's __init__, how the heck is it supposed to manage to take two arguments nonetheless. That one I *really* don't have an answer for.
No need to reply except when you've changed the article. I'm tired of the allegations.
Sigh. James

At 02:46 AM 1/6/05 -0500, James Y Knight wrote:
To fix #1, it would be really nice if you could write code something like the following snippet. Notice especially here that the 'bar' argument gets passed through C.__init__ and A.__init__, into D.__init__, without the previous two having to do anything about it. However, if you ask me to detail how this could *possibly* *ever* work in python, I have no idea. Probably the answer is that it can't.
class A(object):
    def __init__(self):
        print "A"
        next_method

class B(object):
    def __init__(self):
        print "B"
        next_method
Not efficiently, no, but it's *possible*. Just write a 'next_method()' routine that walks the frame stack and self's MRO, looking for a match. You know the method name from f_code.co_name, and you can check each class' __dict__ until you find a function or classmethod object whose code is f_code. If not, move up to the next frame and try again. Once you know the class that the function comes from, you can figure out the "next" method, and pull its args from the calling frame's args, walking backward to other calls on the same object, until you find all the args you need. Oh, and don't forget to make sure that you're inspecting frames that have the same 'self' object. Of course, the result would be a hideous evil ugly hack that should never see the light of day, but you could *do* it, if you *really really* wanted to. And if you wrote it in C, it might be only 50 or 100 times slower than super(). :)
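For the curious, a rough sketch of what such a hack could look like. Everything here is invented (including the name next_method), it takes its arguments explicitly instead of scraping them from calling frames as described above, and it ignores classmethods, staticmethods, old-style classes and much else; a sketch, not an implementation:

import sys

def next_method(self, *args, **kwargs):
    code = sys._getframe(1).f_code
    name = code.co_name
    mro = type(self).__mro__
    # Find which class in self's MRO defined the currently running function.
    for index, cls in enumerate(mro):
        func = cls.__dict__.get(name)
        if getattr(func, 'func_code', None) is code:
            break
    else:
        raise TypeError("can't find %r in the MRO of %r" % (name, type(self)))
    # Call the same-named method on the next class in the MRO that defines one.
    for cls in mro[index + 1:]:
        func = cls.__dict__.get(name)
        if func is not None:
            return func(self, *args, **kwargs)
    return None

class A(object):
    def frob(self):
        print "A.frob"

class B(A):
    def frob(self):
        print "B.frob"
        next_method(self)

B().frob()    # prints B.frob, then A.frob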

"James Y Knight" <foom@fuhm.net> wrote in message news:091248B6-5FB7-11D9-8D68-000A95A50FB2@fuhm.net...
Please notice that I'm talking about concrete, real issues, not just a "super is bad!" rant.
Umm, James, come on. Let's be really real and concrete ;-). Your title "Python's Super Considered Harmful" is an obvious reference to and takeoff on Dijkstra's influential polemic "Goto Considered Harmful". To me, the obvious message therefore is that super(), like goto, is an ill-conceived monstrosity that warps peoples' minds and should be banished. I can also see a slight dig at Guido for introducing such a thing decades after Dijkstra taught us to know better. If that is your summary message for me, fine. If not, try something else. The title of a piece is part of its message -- especially when it has an intelligible meaning. For people who read the title in, for instance, a clp post (as I did), but don't follow the link and read what is behind the title (which I did do), the title *is* the message. Terry J. Reedy

On 2005 Jan 06, at 20:16, Terry Reedy wrote:
"James Y Knight" <foom@fuhm.net> wrote in message news:091248B6-5FB7-11D9-8D68-000A95A50FB2@fuhm.net...
Please notice that I'm talking about concrete, real issues, not just a "super is bad!" rant.
Umm, James, come on. Let's be really real and concrete ;-).
Your title "Python's Super Considered Harmful" is an obvious reference to and takeoff on Dijkstra's influential polemic "Goto Considered Harmful".
...or any other of the 345,000 google hits on "considered harmful"...?-) <http://www.meyerweb.com/eric/comment/chech.html> Alex

"Alex Martelli" <aleax@aleax.it> wrote in message news:10835172-6024-11D9-ADA4-000A95EFAE9E@aleax.it...
On 2005 Jan 06, at 20:16, Terry Reedy wrote:
[Knight's] title "Python's Super Considered Harmful" is an obvious reference to and takeoff on Dijkstra's influential polemic "Go To Statement Considered Harmful". http://www.acm.org/classics/oct95/
[title corrected from original posting and link added]
...or any other of the 345,000 google hits on "considered harmful"...?-)
Restricting the search space to 'Titles of computer science articles' would reduce the number of hits considerably. Many things have been considered harmful at some time in almost every field of human endeavor. However, according to Eric Meyer's "Considered Harmful" Essays Considered Harmful,
even that restriction would lead to thousands of hits inspired directly or indirectly by Niklaus Wirth's title for Dijkstra's Letter to the Editor. Thanks for the link. Terry J. Reedy

[Tim Peters]
... Unbound methods are used most often (IME) to call a base-class method from a subclass, like my_base.the_method(self, ...). It's especially easy to forget to write `self, ` there, and the exception msg then is quite focused because of that extra bit of type checking. Otherwise I expect we'd see a more-mysterious AttributeError or TypeError when the base method got around to trying to do something with the bogus `self` passed to it.
[Josiah Carlson]
Agreed.
Well, it's not that easy to agree with. Guido replied that most such cases would raise an argument-count-mismatch exception instead. I expect that's because he stopped working on Zope code, so actually thinks it's odd again to see a gazillion methods like:

class Registerer(my_base):
    def register(*args, **kws):
        my_base.register(*args, **kws)

I bet he even presumes that if you chase such chains long enough, you'll eventually find a register() method *somewhere* that actually uses its arguments <wink>.
While it seems that super() is the 'modern paradigm' for this, I have been using base.method(self, ...) for years now, and have been quite happy with it. After attempting to convert my code to use the super() paradigm, and having difficulty, I discovered James Knight's "Python's Super Considered Harmful" (available at http://www.ai.mit.edu/people/jknight/super-harmful/ ), wherein I discovered how super really worked (I should have read the documentation in the first place), and reverted my changes to the base.method version.
How did super() get into this discussion? I don't think I've ever used it myself, but I avoid fancy inheritance graphs in "my own" code, so can live with anything.
I could live with it too, but I would probably use an equivalent of the following (with actual type checking):
def mysuper(typ, obj):
    lm = list(obj.__class__.__mro__)
    indx = lm.index(typ)
    if indx == 0:
        return obj
    return super(lm[indx-1], obj)
All in all, I'm -0. I don't desire to replace all of my base.method with mysuper(base, obj).method, but if I must sacrifice convenience for the sake of making Python 2.5's implementation simpler, I guess I'll deal with it. My familiarity with grep's regular expressions leaves something to be desired, so I don't know how often base.method(self,...) is or is not used in the standard library.
I think there may be a misunderstanding here. Guido isn't proposing that base.method(self, ...) would stop working -- it would still work fine. The result of base.method would still be a callable object: it would no longer be of an "unbound method" type (it would just be a function), and wouldn't do special checking on the first argument passed to it anymore, but base.method(self, ...) would still invoke the base class method. You wouldn't need to rewrite anything (unless you're doing heavy-magic introspection, picking callables apart).

Tim Peters <tim.peters@gmail.com> wrote:
[Tim Peters]
... Unbound methods are used most often (IME) to call a base-class method from a subclass, like my_base.the_method(self, ...). It's especially easy to forget to write `self, ` there, and the exception msg then is quite focused because of that extra bit of type checking. Otherwise I expect we'd see a more-mysterious AttributeError or TypeError when the base method got around to trying to do something with the bogus `self` passed to it.
[Josiah Carlson]
Agreed.
Well, it's not that easy to agree with. Guido replied that most such cases would raise an argument-count-mismatch exception instead. I expect that's because he stopped working on Zope code, so actually thinks it's odd again to see a gazillion methods like:
class Registerer(my_base):
    def register(*args, **kws):
        my_base.register(*args, **kws)
I bet he even presumes that if you chase such chains long enough, you'll eventually find a register() method *somewhere* that actually uses its arguments <wink>.
If type checking is important, one can always add it using decorators. Then again, I would be willing to wager that most people wouldn't add it due to laziness, until it bites them for more than a few hours worth of debugging time.
While it seems that super() is the 'modern paradigm' for this, I have been using base.method(self, ...) for years now, and have been quite happy with it. After attempting to convert my code to use the super() paradigm, and having difficulty, I discovered James Knight's "Python's Super Considered Harmful" (available at http://www.ai.mit.edu/people/jknight/super-harmful/ ), wherein I discovered how super really worked (I should have read the documentation in the first place), and reverted my changes to the base.method version.
How did super() get into this discussion? I don't think I've ever used it myself, but I avoid fancy inheritance graphs in "my own" code, so can live with anything.
It was my misunderstanding of your statement in regards to base.method. I had thought that base.method(self, ...) would stop working, and attempted to discover how one would be able to get the equivalent back, regardless of the inheritance graph.
I could live with it too, but I would probably use an equivalent of the following (with actual type checking):
def mysuper(typ, obj):
    lm = list(obj.__class__.__mro__)
    indx = lm.index(typ)
    if indx == 0:
        return obj
    return super(lm[indx-1], obj)
All in all, I'm -0. I don't desire to replace all of my base.method with mysuper(base, obj).method, but if I must sacrifice convenience for the sake of making Python 2.5's implementation simpler, I guess I'll deal with it. My familiarity with grep's regular expressions leaves something to be desired, so I don't know how often base.method(self,...) is or is not used in the standard library.
I think there may be a misunderstanding here. Guido isn't proposing that base.method(self, ...) would stop working -- it would still work fine. The result of base.method would still be a callable object: it would no longer be of an "unbound method" type (it would just be a function), and wouldn't do special checking on the first argument passed to it anymore, but base.method(self, ...) would still invoke the base class method. You wouldn't need to rewrite anything (unless you're doing heavy-magic introspection, picking callables apart).
Indeed, there was a misunderstanding on my part. I misunderstood your discussion of base.method(self, ...) to mean that such things would stop working. My apologies. - Josiah

Tim Peters wrote:
I expect that's because he stopped working on Zope code, so actually thinks it's odd again to see a gazillion methods like:
    class Registerer(my_base):
        def register(*args, **kws):
            my_base.register(*args, **kws)
I second that! My PyGUI code is *full* of __init__ methods like that, because of my convention for supplying initial values of properties as keyword arguments. -- Greg

On Jan 4, 2005, at 8:18 PM, Josiah Carlson wrote:
Tim Peters <tim.peters@gmail.com> wrote:
Guido wrote: Let's get rid of unbound methods. When class C defines a method [snip]
Really? Unbound methods are used most often (IME) to call a base-class method from a subclass, like my_base.the_method(self, ...). It's especially easy to forget to write `self, ` there, and the exception msg then is quite focused because of that extra bit of type checking. Otherwise I expect we'd see a more-mysterious AttributeError or TypeError when the base method got around to trying to do something with the bogus `self` passed to it.
Agreed. While it seems that super() is the 'modern paradigm' for this, I have been using base.method(self, ...) for years now, and have been quite happy with it. After attempting to convert my code to use the super() paradigm, and having difficulty, I discovered James Knight's "Python's Super Considered Harmful" (available at http://www.ai.mit.edu/people/jknight/super-harmful/ ), wherein I discovered how super really worked (I should have read the documentation in the first place), and reverted my changes to the base.method version.
How does removing the difference between unbound methods and plain functions break base.method(self, ...) at all, if it was correct code in the first place? As far as I can tell, all it does is remove any restriction on what "self" is allowed to be.

On another note - I don't agree with the "super considered harmful" rant at all. Yes, when you're using __init__ and __new__ of varying signatures in a complex class hierarchy, initialization is going to be one hell of a problem -- no matter which syntax you use. All super is doing is taking the responsibility of calculating the MRO away from you, and it works awfully well for the general case where a method of a given name has the same signature and the class hierarchies are not insane. If you have a class hierarchy where this is a problem, it's probably pretty fragile to begin with, and you should think about making it simpler. -bob
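For what it's worth, a minimal sketch of the well-behaved case Bob describes, where the method has the same signature everywhere and super() simply walks the MRO (class names invented):

    class Base(object):
        def setup(self, **kws):
            self.extra = kws

    class LoggingMixin(Base):
        def setup(self, **kws):
            super(LoggingMixin, self).setup(**kws)
            self.log = []

    class App(LoggingMixin, Base):
        def setup(self, **kws):
            super(App, self).setup(**kws)
            self.ready = True

    app = App()
    app.setup(color="blue")
    # each setup ran exactly once, in MRO order: App, LoggingMixin, Base
    print app.ready, app.log, app.extra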

On Tue, 2005-01-04 at 22:12 -0500, Bob Ippolito wrote:
If you have a class hierarchy where this is a problem, it's probably pretty fragile to begin with, and you should think about making it simpler.
I agree with James's rant almost entirely, but I like super() anyway. I think it is an indication not of a new weakness of super(), but of a long-standing weakness of __init__. One approach I have taken in order to avoid copiously over-documenting every super()-using class is to decouple different phases of initialization by making __init__ as simple as possible (setting a few attributes, resisting the temptation to calculate things), and then providing class methods like '.fromString' or '.forUnserialize' that create instances that have been completely constructed for a particular purpose. That way the signatures are much more likely to line up across inheritance hierarchies. Perhaps this should be a suggested "best practice" when using super() as well?
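A sketch of that pattern with invented names -- a deliberately dumb __init__ plus a classmethod constructor that does the real work:

    class Record(object):
        def __init__(self):
            # as simple as possible: set attributes, compute nothing
            self.fields = {}

        @classmethod
        def fromString(cls, text):
            # the real construction work lives here, not in __init__
            inst = cls()
            for part in text.split(","):
                key, value = part.split("=", 1)
                inst.fields[key] = value
            return inst

    class TimestampedRecord(Record):
        def __init__(self):
            super(TimestampedRecord, self).__init__()
            self.created = None     # __init__ signatures still line up

    r = TimestampedRecord.fromString("a=1,b=2")
    print r.fields                  # the parsed fields; ordering may vary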

On Wed, 2005-01-05 at 10:37, Glyph Lefkowitz wrote:
One approach I have taken in order to avoid copiously over-documenting every super()-using class is to decouple different phases of initialization by making __init__ as simple as possible (setting a few attributes, resisting the temptation to calculate things), and then providing class methods like '.fromString' or '.forUnserialize' that create instances that have been completely constructed for a particular purpose. That way the signatures are much more likely to line up across inheritance hierarchies. Perhaps this should be a suggested "best practice" when using super() as well?
Yep, I've done the same thing. It's definitely a good practice. -Barry

Josiah Carlson wrote:
While it seems that super() is the 'modern paradigm' for this, I have been using base.method(self, ...) for years now, and have been quite happy with it.
I too would be very disappointed if base.method(self, ...) became somehow deprecated. Cooperative super calls are a different beast altogether and have different use cases. In fact I'm having difficulty finding *any* use cases at all for super() in my code. I thought I had found one once, but on further reflection I changed my mind. And I have found that the type checking of self provided by unbound methods has caught a few bugs that would probably have produced more mysterious symptoms otherwise. But I can't say for sure whether they would have been greatly more mysterious -- perhaps not. -- Greg

On Tue, 4 Jan 2005 10:28:03 -0800, Guido van Rossum <gvanrossum@gmail.com> wrote:
In my blog I wrote:
Let's get rid of unbound methods. When class C defines a method f, C.f should just return the function object, not an unbound method that behaves almost, but not quite, the same as that function object. The extra type checking on the first argument that unbound methods are supposed to provide is not useful in practice (I can't remember that it ever caught a bug in my code) and sometimes you have to work around it; it complicates function attribute access; and the overloading of unbound and bound methods on the same object type is confusing. Also, the type checking offered is wrong, because it checks for subclassing rather than for duck typing.
This would make pickling (or any serialization mechanism) of `Class.method' based on name next to impossible. Right now, with the appropriate support, this works:

    >>> import pickle
    >>> class Foo:
    ...     def bar(self): pass
    ...
    >>> pickle.loads(pickle.dumps(Foo.bar))
    <unbound method Foo.bar>
    >>>

I don't see how it could if Foo.bar were just a function object.

Jp

On Tue, 04 Jan 2005 20:02:06 GMT, Jp Calderone <exarkun@divmod.com> wrote:
On Tue, 4 Jan 2005 10:28:03 -0800, Guido van Rossum <gvanrossum@gmail.com> wrote:
In my blog I wrote:
Let's get rid of unbound methods. When class C defines a method f, C.f should just return the function object, not an unbound method that behaves almost, but not quite, the same as that function object. The extra type checking on the first argument that unbound methods are supposed to provide is not useful in practice (I can't remember that it ever caught a bug in my code) and sometimes you have to work around it; it complicates function attribute access; and the overloading of unbound and bound methods on the same object type is confusing. Also, the type checking offered is wrong, because it checks for subclassing rather than for duck typing.
This would make pickling (or any serialization mechanism) of `Class.method' based on name next to impossible. Right now, with the appropriate support, this works:
It occurs to me that perhaps I was not clear enough here. What I mean is that it is possible to serialize unbound methods currently, because they refer to their own name and the name of their class object, and thus indirectly to the module in which they are defined. If looking up a method on a class object instead returns a function, then the class is no longer knowable, and most likely the function will not have a unique name which can be used to allow a reference to it to be serialized. In particular, I don't see how one will be able to write something equivalent to this:

    import new, copy_reg, types

    def pickleMethod(method):
        return unpickleMethod, (method.im_func.__name__,
                                method.im_self,
                                method.im_class)

    def unpickleMethod(im_name, im_self, im_class):
        unbound = getattr(im_class, im_name)
        if im_self is None:
            return unbound
        return new.instancemethod(unbound.im_func, im_self, im_class)

    copy_reg.pickle(types.MethodType, pickleMethod, unpickleMethod)

But perhaps I am just overlooking the obvious.

Jp
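For what it's worth, with that registration in place (and assuming everything is defined at module level so pickle can find it by name), the round trip looks roughly like this under Python 2.x:

    import pickle

    class Foo:
        def bar(self):
            return "hi"

    f = Foo()
    bound = pickle.loads(pickle.dumps(f.bar))      # bound method round-trips
    unbound = pickle.loads(pickle.dumps(Foo.bar))  # so does the unbound one
    print bound(), unbound(Foo())                  # prints "hi hi"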

[me]
Actually, unbound builtin methods are a different type than bound builtin methods:
[Jim]
Of course, but conceptually they are similar. You would still encounter the concept if you got an unbound builtin method.
Well, these are all just implementation details. They really are all just callables. [Jp]
This would make pickling (or any serialization mechanism) of `Class.method' based on name next to impossible. Right now, with the appropriate support, this works:
    >>> import pickle
    >>> class Foo:
    ...     def bar(self): pass
    ...
    >>> pickle.loads(pickle.dumps(Foo.bar))
    <unbound method Foo.bar>
    >>>
I don't see how it could if Foo.bar were just a function object.
Is this a purely theoretical objection or are you actually aware of anyone doing this? Anyway, that approach is pretty limited -- how would you do it for static and class methods, or methods wrapped by other decorators? -- --Guido van Rossum (home page: http://www.python.org/~guido/)

On Tue, 4 Jan 2005 12:18:15 -0800, Guido van Rossum <gvanrossum@gmail.com> wrote:
[me]
Actually, unbound builtin methods are a different type than bound builtin methods:
[Jim]
Of course, but conceptually they are similar. You would still encounter the concept if you got an unbound builtin method.
Well, these are all just implementation details. They really are all just callables.
[Jp]
This would make pickling (or any serialization mechanism) of `Class.method' based on name next to impossible. Right now, with the appropriate support, this works:
    >>> import pickle
    >>> class Foo:
    ...     def bar(self): pass
    ...
    >>> pickle.loads(pickle.dumps(Foo.bar))
    <unbound method Foo.bar>
    >>>
I don't see how it could if Foo.bar were just a function object.
Is this a purely theoretical objection or are you actually aware of anyone doing this? Anyway, that approach is pretty limited -- how would you do it for static and class methods, or methods wrapped by other decorators?
It's not a feature I often depend on, but I have made use of it on occasion. Twisted supports serializing unbound methods this way, primarily to enhance the usability of tap files (a feature whereby an application is configured by constructing a Python object graph which is then pickled to a file to later be loaded and run). "Objection" may be too strong a word for my stance here; I just wanted to point out another potentially incompatible behavior change. I can't think of any software which I am currently developing or maintaining that benefits from this feature; it just seems unfortunate to further complicate the already unpleasant business of serialization. Jp

On 4-jan-05, at 19:28, Guido van Rossum wrote:
The extra type checking on the first argument that unbound methods are supposed to provide is not useful in practice (I can't remember that it ever caught a bug in my code)
It caught bugs for me a couple of times. If I remember correctly I was calling methods of something that was supposed to be a mixin class but I forgot to actually list the mixin as a base. But I don't think that's a serious enough issue alone to keep the unbound method type. But I'm more worried about losing the other information in an unbound method, specifically im_class. I would guess that info is useful to class browsers and such, or are there other ways to get at that? -- Jack Jansen, <Jack.Jansen@cwi.nl>, http://www.cwi.nl/~jack If I can't dance I don't want to be part of your revolution -- Emma Goldman

On Jan 4, 2005, at 6:01 PM, Jack Jansen wrote:
On 4-jan-05, at 19:28, Guido van Rossum wrote:
The extra type checking on the first argument that unbound methods are supposed to provide is not useful in practice (I can't remember that it ever caught a bug in my code)
It caught bugs for me a couple of times. If I remember correctly I was calling methods of something that was supposed to be a mixin class but I forgot to actually list the mixin as a base. But I don't think that's a serious enough issue alone to keep the unbound method type.
But I'm more worried about losing the other information in an unbound method, specifically im_class. I would guess that info is useful to class browsers and such, or are there other ways to get at that?
For a class browser, presumably, you would start at the class and then find the methods. Starting from some class and walking the mro, you can inspect the dicts along the way and you'll find everything and know where it came from. -bob
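A rough sketch of that kind of MRO walk (the helper name is made up; this is just one way to recover "which class defined this method" without im_class):

    def browse_methods(cls):
        # map method name -> defining class; the most-derived definition wins
        found = {}
        for c in cls.__mro__:
            for name, attr in vars(c).items():
                if callable(attr) and not name.startswith("_") and name not in found:
                    found[name] = c
        return found

    class A(object):
        def foo(self): pass

    class B(A):
        def bar(self): pass

    for name, owner in sorted(browse_methods(B).items()):
        print name, "defined in", owner.__name__
    # prints: bar defined in B / foo defined in A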

On Tue, 2005-01-04 at 18:01, Jack Jansen wrote:
But I'm more worried about losing the other information in an unbound method, specifically im_class. I would guess that info is useful to class browsers and such, or are there other ways to get at that?
That would be my worry too. OTOH, we have function attributes now, so why couldn't we just stuff the class on the function's im_class attribute? Who'd be the wiser? (Could the same be done for im_self and im_func for backwards compatibility?) quack-quack-ly y'rs, -Barry
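One way to read that suggestion, sketched as a metaclass that stamps the attribute at class-creation time (entirely hypothetical; nothing like this exists or is being proposed in this thread):

    import types

    class StampImClass(type):
        def __init__(cls, name, bases, namespace):
            super(StampImClass, cls).__init__(name, bases, namespace)
            for attr in namespace.values():
                # tag plain functions defined in the class body
                if isinstance(attr, types.FunctionType):
                    attr.im_class = cls     # just an ordinary function attribute

    class C(object):
        __metaclass__ = StampImClass
        def f(self):
            pass

    print C.__dict__['f'].im_class          # the class C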

On 2005 Jan 05, at 04:42, Barry Warsaw wrote:
On Tue, 2005-01-04 at 18:01, Jack Jansen wrote:
But I'm more worried about losing the other information in an unbound method, specifically im_class. I would guess that info is useful to class browsers and such, or are there other ways to get at that?
That would be my worry too. OTOH, we have function attributes now, so why couldn't we just stuff the class on the function's im_class attribute? Who'd be the wiser? (Could the same be done for im_self and im_func for backwards compatibility?)
Hmmm, seems to me we'd need copies of the function object for this purpose:

    def f(*a): pass

    class C(object): pass
    class D(object): pass

    C.f = D.f = f

If now we want C.f.im_class to differ from D.f.im_class then we need f to get copied implicitly when it's assigned to C.f (or, of course, when C.f is accessed... but THAT might be substantial overhead). OK, I guess, as long as we don't expect any further attribute setting on f to affect C.f or D.f (and I don't know of any real use case where that would be needed).

Alex

On Wed, 2005-01-05 at 12:11 +0100, Alex Martelli wrote:
Hmmm, seems to me we'd need copies of the function object for this purpose:
For the stated use-case of serialization, only one copy would be necessary, and besides - even *I* don't use idioms as weird as the one you are suggesting very often ;). I think it would be reasonable to assign im_class only to functions defined in class scope. The only serialization that would break in that case is if your example had a 'del f' at the end.

On Wed, 2005-01-05 at 10:41, Glyph Lefkowitz wrote:
I think it would be reasonable to assign im_class only to functions defined in class scope. The only serialization that would break in that case is if your example had a 'del f' at the end.
+1. If you're doing something funkier, then you can set that attribute yourself. -Barry

At 12:29 PM 1/5/05 -0500, Barry Warsaw wrote:
On Wed, 2005-01-05 at 10:41, Glyph Lefkowitz wrote:
I think it would be reasonable to assign im_class only to functions defined in class scope. The only serialization that would break in that case is if your example had a 'del f' at the end.
+1. If you're doing something funkier, then you can set that attribute yourself.
-Barry
Um, isn't all this stuff going to be more complicated and spread out over more of the code than just leaving unbound methods in place?

Um, isn't all this stuff going to be more complicated and spread out over more of the code than just leaving unbound methods in place?
Well, in an early version of Python it was as simple as I'd like it to be again: the instancemethod type was only used for bound methods (hence the name) and C.f would return the same function object as C.__dict__["f"]. Apart from backwards compatibility with all the code that has grown cruft to deal with the fact that C.f is not a function object, I still see no reason why the current state of affairs is better. -- --Guido van Rossum (home page: http://www.python.org/~guido/)
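Concretely, the difference being discussed (the first two lines reflect current 2.x behavior; the last is the proposal, not anything released):

    class C(object):
        def f(self):
            pass

    print type(C.f)                 # <type 'instancemethod'> today
    print C.f is C.__dict__['f']    # False today: a fresh wrapper each access
    # under the proposal, C.f would simply be C.__dict__['f'], the function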

Alex Martelli wrote:
    def f(*a): pass

    class C(object): pass
    class D(object): pass

    C.f = D.f = f
If now we want C.f.im_class to differ from D.f.im_class then we need f to get copied implicitly when it's assigned to C.f (or, of course, when C.f is accessed... but THAT might be substantial overhead). OK, I guess, as long as we don't expect any further attribute setting on f to affect C.f or D.f (and I don't know of any real use case where that would be needed).
You'd have to do a copy anyway, because f() is still a module-level callable entity. I also agree with Glyph that im_class should only really be set in the case of methods defined within the class block.

Also, interestingly, removing unbound methods makes another thing possible:

    class A(object):
        def foo(self):
            pass

    class B(object):
        foo = A.foo

    class C(object):
        pass
    C.foo = A.foo

I'd really like to avoid making copies of functions for the sake of reload() and edit-and-continue functionality. Currently we can track down everything that has a reference to foo and replace it with newfoo. With copies, this would be more difficult.

Thanks,
-Shane

Hi Guido, On Tue, Jan 04, 2005 at 10:28:03AM -0800, Guido van Rossum wrote:
Let's get rid of unbound methods.
Is there any other use case for 'C.x' not returning the same as 'appropriate_super_class_of_C.__dict__["x"]' ? I guess it's too late now but it would have been nice if user-defined __get__() methods had the more obvious signature (self, instance) instead of (self, instance_or_None, cls=None). Given the amount of potential breakage people already pointed out I guess it is not reasonable to change that. Armin
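For reference, the signature Armin is talking about, shown with a toy descriptor (names invented):

    class Verbose(object):
        # the existing three-argument protocol Armin would have preferred
        # to be just (self, instance)
        def __get__(self, instance, owner=None):
            if instance is None:
                return "looked up on class %s" % owner.__name__
            return "looked up on an instance of %s" % owner.__name__

    class C(object):
        attr = Verbose()

    print C.attr        # looked up on class C
    print C().attr      # looked up on an instance of C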

At 04:30 PM 1/5/05 +0000, Armin Rigo wrote:
Hi Guido,
On Tue, Jan 04, 2005 at 10:28:03AM -0800, Guido van Rossum wrote:
Let's get rid of unbound methods.
Is there any other use case for 'C.x' not returning the same as 'appropriate_super_class_of_C.__dict__["x"]' ?
Er, classmethod would be one; a rather important one at that.
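A quick illustration of that point under current Python 2.x:

    class C(object):
        @classmethod
        def x(cls):
            return cls

    print C.__dict__['x']       # the raw classmethod descriptor object
    print C.x                   # a bound method, with C already bound in
    print C.x() is C            # True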
participants (21)
- Aahz
- Alex Martelli
- Andrew Koenig
- Armin Rigo
- Barry Warsaw
- Bill Janssen
- Bob Ippolito
- Glyph Lefkowitz
- Greg Ewing
- Guido van Rossum
- Jack Diederich
- Jack Jansen
- James Y Knight
- Jim Fulton
- Josiah Carlson
- Jp Calderone
- Phillip J. Eby
- Raymond Hettinger
- Shane Holloway (IEEE)
- Terry Reedy
- Tim Peters