On 19 July 2016 at 14:04, Guido van Rossum wrote:
> The same argument would apply against abstractmethod... I have definitely
> wondered when looking over some code whether it was defining or overriding
> a method. But requiring people to always decorate overrides seems hard,
> unless we made it a property that can be enforced by a metaclass, maybe?
This is trickier in Python than it is in Java, since we support multiple inheritance, and we don't require that subclasses be drop-in replacements for their parent class.

The key difference I see between @abstractmethod and this proposal is that @abstractmethod is a service that base class authors provide to subclass authors to say "Hey, when subclassing this, you *need* to override these, or your subclass won't work properly". In effect, it's an implicit test case for all future subclass definitions.

By contrast, this proposal would go in the other direction: it would be an assertion by the author of a *subclass* that a particular method isn't new, it's inherited from a parent class, and they're replacing the existing implementation with a new one. It would only add new information for readers in cases where the method wasn't already calling super() (since the latter already implies you expect there to be at least one further implementation along the MRO to call).

In my view, that puts it more in the category of type annotations and the "missing attribute" detection in tools like pylint than of abstractmethod: rather than checking it at runtime, you'd want static checkers like mypy to flag it as a structural error if a method declared as replacing a base class method wasn't actually doing so. So a "typing.override" to complement "typing.overload" would make sense to me, but a checked-at-runtime decorator wouldn't. (Doing it that way would also let you put the override annotation in stub files instead of directly in the code.)

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan@gmail.com   |   Brisbane, Australia
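To make the contrast concrete, here's a rough sketch of the two checking models. The `override` marker and `check_overrides` helper below are hypothetical names invented for illustration (not part of any real library), and the point of the proposal is that this kind of check belongs in a static checker, not at runtime; the runtime version here just demonstrates the semantics being asserted:

```python
import abc

class Base(abc.ABC):
    @abc.abstractmethod
    def render(self):
        """Base class author's contract: subclasses *must* provide this."""

# abstractmethod is enforced at runtime, at instantiation time:
try:
    Base()
except TypeError:
    pass  # "Can't instantiate abstract class Base ..."

# Hypothetical subclass-side marker: "this method replaces an
# inherited one" (illustrative names only).
def override(method):
    method.__override__ = True
    return method

def check_overrides(cls):
    """Toy checker: every @override method must exist on some base class."""
    for name, attr in vars(cls).items():
        if getattr(attr, "__override__", False):
            if not any(name in vars(base) for base in cls.__mro__[1:]):
                raise TypeError(
                    f"{cls.__name__}.{name} is marked @override, "
                    f"but no base class defines {name!r}"
                )
    return cls

@check_overrides
class Widget(Base):
    @override
    def render(self):          # genuinely replaces Base.render
        return "widget"

# A typo'd name is caught as a structural error rather than
# silently defining a brand-new method:
try:
    @check_overrides
    class Broken(Base):
        @override
        def rendor(self):      # no base class defines this name
            return "oops"
except TypeError:
    pass
```

A static checker consuming the same marker could report the `Broken.rendor` case at analysis time, with no runtime cost and with the annotation optionally living in a stub file.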