Hi all,

I blogged about that topic today, which turned out to be a very bad idea, so I'm summarizing it for the mailing list here to hopefully start a discussion about the topic, which I think is rather important.

In the last weeks something remarkable happened in the Python 3 sources: self kinda became implicit. Not in function definitions, but in super calls. And not only self: also the class passed to `super`. That's interesting because it means the language is shifting in a completely different direction.

`super` was rarely used in the past, mainly because it was weird to use. In the most common use case the current class and the current instance were passed to it, and the super object returned looked up the parent methods on the MRO for you. It was useful for multiple inheritance and for mixin classes that don't know their parent, but confusing for many. I can see that a replacement is a good idea, but I'm not so sure the current implementation is the way to go.

The main problem with replacing `super(Foo, self).bar()` with something like `super.bar()` is obviously that self is explicit, and the class (in that case Foo) can't be determined by the caller. Furthermore the Python principle was always against functions doing stack introspection to find the caller. There are few examples in the stdlib or builtins that do some sort of caller introspection: the special functions `vars`, `locals`, `globals`, and some functions in the inspect module. And all of them do nothing more than getting the current frame and accessing its dict of locals or globals. What super in the current Python 3 builds does goes way beyond that.

The implementation of the automatic super currently has two ugly details that I think violate the Zen of Python: the bytecode generated is different if the name "super" is used in the function, and `__class__` is only added as a cell to the code if `super` or `__class__` is referenced.
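In released CPython 3.x builds both details are easy to observe: merely *naming* `super` in a method body is enough to give the function a hidden `__class__` closure cell (the class and method names here are only illustrative):

```python
class C:
    def uses_super(self):
        # just referencing the name super is enough; no call needed
        return super

    def plain(self):
        return None

# the compiler gave uses_super a hidden __class__ free variable
print(C.uses_super.__code__.co_freevars)  # ('__class__',)
print(C.plain.__code__.co_freevars)       # ()

# after class creation the cell actually contains C
print(C.uses_super.__closure__[0].cell_contents is C)  # True
print(C.plain.__closure__ is None)                     # True
```

So two methods that differ only in mentioning `super` compile to different code objects.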
That, and the fact that `f_localsplus` is completely unavailable from within Python, makes the whole process appear magical. This is way more magical than anything we've had in Python in the past, and it just doesn't fit into the language in my opinion. We do have an explicit self in methods, and methods are more or less just functions. Python's methods are functions; a descriptor merely wraps a method object around them to pass self as the first argument. That's an incredibly cool thing to have and makes things very simple and non-magical. Breaking that principle by introducing an automatic super seems to harm the concept.

Another odd thing is that Python 3 starts keeping information on the C layer that we can't access from within Python. Super is one example; another good one is methods. They don't have a descriptor that wraps them when they are accessed via their classes. That as such is not a problem, as you can call them the same way (just that you can now call them with completely different receivers), but it becomes a problem when some of the functions are marked as staticmethods. Then they look completely the same from the class's perspective:

| >>> class C:
| ...     normal = lambda x: None
| ...     static = staticmethod(lambda x: None)
| ...
| >>> type(C.normal) is type(C.static)
| True
| >>> C.normal
| <function <lambda> at 0x4da150>

As far as I can see, a documentation tool has no chance to keep them apart, even though they are completely different on an instance:

| >>> type(C().normal) is type(C().static)
| False
| >>> C().normal
| <bound method C.<lambda> of <__main__.C object at 0x4dbcf0>>
| >>> C().static
| <function <lambda> at 0x4da198>

While I don't know about the method thing, I think an automatic super should at least be implementable from within Python. I could imagine that by adding __class__ and __self__ to scopes automatically, a lot of that magic could be removed.

Regards,
Armin
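PS: To sketch what I mean by "implementable from within Python": if `__class__` and the instance were available in every method's scope, the MRO lookup itself needs no C-level magic at all. A rough pure-Python stand-in (the helper name `plain_super` is made up, and descriptor handling is simplified):

```python
def plain_super(cls, obj):
    """Pure-Python stand-in for super(): find the classes after
    'cls' in the instance's MRO and bind attributes found there."""
    mro = type(obj).__mro__
    idx = mro.index(cls) + 1

    class _Proxy:
        def __getattr__(self, name):
            for c in mro[idx:]:
                if name in c.__dict__:
                    attr = c.__dict__[name]
                    # bind descriptors (plain functions become bound methods)
                    if hasattr(attr, '__get__'):
                        return attr.__get__(obj, type(obj))
                    return attr
            raise AttributeError(name)

    return _Proxy()

class A:
    def ping(self):
        return "A.ping"

class B(A):
    def ping(self):
        # with an implicit __class__ cell in scope this could be
        # spelled plain_super(__class__, self).ping()
        return "B -> " + plain_super(B, self).ping()

print(B().ping())  # B -> A.ping
```

The compiler would only have to inject the two names; everything else stays visible, ordinary Python.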