Fwd: Define a method or function attribute outside of a class with the dot operator

Keep in mind that the extra syntax is *very* minor, and goes hand in hand with the existing attribute access syntax. Basically it's taking the existing syntax to one more place, where in my opinion it should have been since long ago.
Yeah, this is a major reason why I want this, and the reason I mentioned "unnatural order" in the original mail. Having the class's name at the beginning just makes it feel right.
I didn't even realize you would avoid the global namespace issue too; this makes me fall in love with the idea even more. I really think the added complexity isn't much. One thing to consider, however, is what would happen if someone attempted to use this in a class definition:

class Foo:
    ...

class Bar:
    def Foo.meth(self):
        ...

On Fri, Feb 10, 2017 at 9:45 PM, Markus Meskanen <markusmeskanen@gmail.com> wrote:
Every now and then, we get a proposal along these lines. I think it's about time a PEP got written.

The usual way this is explained is that a function name can be anything you can assign to. Currently, a function has to have a simple name, and it then gets created with that as its __name__ and bound to that name in the current namespace (module, class, or function). To achieve what you're looking for, the syntax would be defined in terms of assignment, same as a 'for' loop's iteration variable is:

# Perfectly legal
for spam.ham in iter: pass

# Not currently legal
def ham.spam(): pass

Markus, do you want to head this up? I'll help out with any editorial work you have trouble with (as a PEP editor, I can assign it a number and so on).

Considerations:

* What would the __name__ be? In "def ham.spam():", is the name "spam" or "ham.spam"? Or say you have "def x[0]():" - is the name "x[0]" or something else?
* Are there any syntactic ambiguities? Any special restrictions?
* Exactly what grammar token would be used? Currently NAME; might become 'test'?
* Will there be any possible backward incompatibilities?

Create a pull request against https://github.com/python/peps - looks like the next number is 542. Any questions, I'm happy to help.

ChrisA
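The "perfectly legal" half of that parallel is easy to check in current Python (`Holder` is just a throwaway class for illustration):

```python
class Holder:
    pass

spam = Holder()

# A for-loop target can already be any assignment target,
# including an attribute -- exactly the parallel Chris draws:
for spam.ham in range(3):
    pass

print(spam.ham)  # -> 2, the last value produced by the loop
```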

On Fri, Feb 10, 2017 at 10:05:30PM +1100, Chris Angelico wrote:
* What would the __name__ be? In "def ham.spam():", is the name "spam" or "ham.spam"?
"spam" of course, just like it is now:

py> class Ham:
...     def spam(self):
...         ...
...
py> Ham.spam.__name__
'spam'

You might be thinking of __qualname__:

py> Ham.spam.__qualname__
'Ham.spam'
Or say you have "def x[0]():" - is the name "x[0]" or something else?
I wouldn't allow that. I feel that "any assignment target at all" is an over-generalisation, a case of YAGNI. It is relatively easy to change our mind and add additional cases in the future, but very difficult to remove them if they turn out to be a mistake.

My intuition tells me that we should allow:

def name dot name (args):

possibly even more than one dot:

def name dot name dot name ... (args):

but no additional cases:

# syntax error
def spam[0]function(): ...

-- Steve

I am definitely -1 on this idea. But since you are discussing this seriously, one nice thing is to recall how Javascript does it: `function <name> ()` is an expression that returns the created function, and thus can be assigned to anything on the left side. Of course, that would throw us back to a way of thinking of inline definition of multiline functions - which is another requested and unresolved thing in Python. (But we might require the `def` statement to still be aligned, at least style-wise, and require people to write

Foo.foo = \
    def (self, ...):
        ...

) That said, this possibility in Javascript is the source of severe inconsistencies in how functions are declared across different libraries and projects, and IMHO makes reading (and writing) a real pain.

(And, as stated above, a two-line decorator could handle the patching - it does not need to have such an ugly name as "monkey_patch" - it could be just "assign" instead)

js -><-

On 10 February 2017 at 09:51, Steven D'Aprano <steve@pearwood.info> wrote:

On Fri, Feb 10, 2017 at 02:28:25PM +0200, Markus Meskanen wrote:
I've started working on a PEP for this since most people seem to be for it.
I don't know how you get "most people" -- there's only been a handful of responses in the few hours since the original post. And apart from one explicit -1, I read most of them as neutral, not in favour.

Of course you are perfectly entitled to start work on a PEP at any time, but don't get your hopes up. I'm one of the neutral parties, perhaps just a tiny bit positive, +0, but only for the original proposal. I am -1000 on allowing arbitrary assignment targets. I believe that the cost in readability far outweighs the usefulness of allowing things like:

def mydict[key].attr[-1](arg): ...

-- Steve

Yeah, I worded that poorly - it's more that most people didn't turn me down, which I was a bit afraid of.
Do not worry, I will not propose the advanced method, only dot notation! That being said, I don't think it's up to the language if someone wants to write ugly code like that; you can already do way uglier stuff with the existing features. I don't really see people doing this either:

mydict[key].attr[-1].append(my_func)

So why would they if we suddenly introduced this for functions? Anyway, that's not a worry of this to-be PEP.

- Markus

Hi all,

I would like to add one more generic remark about syntax extensions, regarding something Markus said and which has bothered me before, also in relation to other syntax proposals:

"Decorator approach is no different from doing `Foo.bar = bar` under the function definition I think, except it requires one to figure out what the decorator does first."

My point would be that the new syntax *also* requires one to figure out what the new syntax does. And unfortunately, syntax is much less discoverable than decorators. For a decorator, I can do `help(decorator)` or search the Python library reference or probably just mouse-hover over the name in my favourite editor/IDE. But if I don't understand the dot in `class foo.bar:`, then what? It's probably somewhere buried in the language spec for `class`, but realistically I am now going to blight Stackoverflow with my questions.

Stephan

2017-02-10 13:13 GMT+01:00 Joao S. O. Bueno <jsbueno@python.org.br>:

On 10 February 2017 at 13:55, Stephan Houben <stephanh42@gmail.com> wrote:
My point would be that the new syntax *also* requires one to figure out what the new syntax does.
This is an extremely good point. It is mentioned when new syntax is proposed (the term often used is "discoverability"), but the idea never seems to stick, as people keep forgetting to consider it when proposing new ideas.

With this proposal the only thing you can search for is "def", and you're going to mostly find sites that explain the current syntax. So anyone looking for understanding of the new construct will likely end up even more confused after searching than they were before.

Markus - if you do write up a PEP, please make sure this point is noted and addressed.

Paul

I deeply believe the dot notation is very simple to understand (for the record, it's the default in JS and Lua and they're not having any issues with it), and I can't think of a situation where someone knows Python well enough to understand decorators with arguments but wouldn't understand the dot notation. We already use the dot notation for normal attributes, so why not use it for attributes in a function def? I think it'll be easier to Stack Overflow the dot notation than argumented decorators.

And what I meant by "they have to figure out what the decorator does first" is that it varies between every project. You absolutely cannot know for sure what the decorator does until you read through it, meaning you have to go look it up every time.

- Markus

Hi list,

I'm quite neutral on this proposition, as it's not a use case I often find myself needing.

On Fri, Feb 10, 2017 at 02:55:31PM +0100, Stephan Houben wrote: […]
but this is definitely not a reason to dismiss a proposal. A language is meant to evolve and introduce new syntax features, and yes, Stackoverflow will get questions about it, blog articles will be written and the RTFM updated, so you'll get the info you need quickly.

Cheers,

-- Guyzmo

On 10 February 2017 at 10:45, Markus Meskanen <markusmeskanen@gmail.com> wrote:
In implementation terms, the syntax change is not as minor as you suggest. At the moment, the syntax for a "def" statement is:

funcdef ::= [decorators] "def" funcname "(" [parameter_list] ")" ["->" expression] ":" suite
funcname ::= identifier

You're proposing replacing "identifier" as the definition of a "funcname" with... what? dotted_name might work, but that opens up the possibility of

class Foo: pass
foo = Foo()
def foo.a(self): pass

(note I'm defining a method on the *instance*, not on the class). Do you want to allow that? What about "def a.b.c.d.e(): pass" (no self argument, deeply nested instance attribute)?

Furthermore, once we open up this possibility, I would expect requests for things like

func_table = {}
func_table["foo"] = lambda a, b: a+b
def func_table["bar"](a, b): return a-b

pretty quickly. How would you respond to those? (Setting up function tables is a much more common and reasonable need than monkeypatching classes.)

Your proposal is clear enough in terms of your intent, but the implementation details are non-trivial.

Paul

PS Personally, I'm slightly in favour of the idea in principle, but I don't think it's a useful enough addition to warrant having to deal with all the questions I note above.
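For what it's worth, the function-table request can already be met today with a tiny registering decorator - `table_entry` is a hypothetical name, sketched here only to show the existing spelling:

```python
func_table = {}

def table_entry(name):
    """Hypothetical helper: register the decorated function in func_table,
    today's spelling of the requested `def func_table["bar"](a, b)` form."""
    def decorator(func):
        func_table[name] = func
        return func
    return decorator

func_table["foo"] = lambda a, b: a + b

@table_entry("bar")
def bar(a, b):
    return a - b

assert func_table["foo"](5, 3) == 8
assert func_table["bar"](5, 3) == 2
```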

On 10 February 2017 at 12:16, Chris Angelico <rosuav@gmail.com> wrote:
But what do __name__ and __qualname__ get set to? What happens if you do this at class scope, rather than at module level or inside another function? What happens to the zero-argument super() support at class scope? What happens if you attempt to use zero-argument super() when *not* at class scope? These are *answerable* questions (and injecting the right __class__ cell reference for zero-argument super() support is a compelling technical argument in favour of this feature over ordinary attribute binding operations), but there's a lot more to the proposal than just relaxing a syntactic restriction in the language grammar. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On Sat, Feb 11, 2017 at 1:16 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
... and are exactly why I asked the OP to write up a PEP. This isn't my proposal, so it's not up to me to make the decisions. For what it's worth, my answers would be:

__name__ would be the textual representation of exactly what you typed between "def" and the open parenthesis. __qualname__ would be built the exact same way it currently is, based on that __name__.

Zero-argument super() would behave exactly the way it would if you used a simple name. This just changes the assignment, not the creation of the function. So if you're inside a class, you could populate a lookup dictionary with method-like functions. Abuse this, and you're only shooting your own foot.

Zero-argument super() outside of a class, just as currently, would be an error. (Whatever kind of error it currently is.)

Maybe there are better answers to these questions, I don't know. That's what the PEP's for.

ChrisA

On Sat, Feb 11, 2017 at 01:25:40AM +1100, Chris Angelico wrote:
If I'm reading this right, you want this behaviour:

class Spam:
    pass

def Spam.func(self):
    pass

assert 'Spam.func' not in Spam.__dict__
assert 'func' in Spam.__dict__
assert Spam.func.__name__ == 'Spam.func'
assert Spam.func.__qualname__ == 'Spam.Spam.func'

If that's the case, I can only ask... what advantage do you see from this? Because I can see plenty of opportunity for confusion, and no advantage.

For what it's worth, Lua already has this feature: http://www.lua.org/pil/6.2.html

Lib = {}
function Lib.foo (x, y)
    return x + y
end

If we define that function foo inside the Lib table, and then cause an error, the Lua interpreter tells us the function name:
-- Steve
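For comparison, here is what today's plain-assignment spelling of the same thing actually produces - the names follow Steven's example, but this is just an illustrative sketch:

```python
class Spam:
    pass

def func(self):
    pass

# Today's spelling of the proposed `def Spam.func(self): ...`
Spam.func = func

assert 'func' in Spam.__dict__
assert Spam.func.__name__ == 'func'
# __qualname__ keeps the name from the definition site (plain 'func'
# when defined at module level), not 'Spam.func' -- one of the gaps
# any new syntax would need to address.
```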

On Sat, Feb 11, 2017 at 2:25 AM, Steven D'Aprano <steve@pearwood.info> wrote:
I might be wrong about the __name__; that was a response that came from the massively extensive research of "hmm, I think this would be what I'd do". It seems the simplest way to cope with the many possibilities; having __name__ be "func" would work in the dot form, but not others. But that's bikeshedding. ChrisA

On 10 February 2017 at 16:25, Steven D'Aprano <steve@pearwood.info> wrote:
What I would personally hope to see from the proposal is that given:

class Spam:
    pass

def Spam.func(self):
    return __class__

the effective runtime behaviour would be semantically identical to:

class Spam:
    def func(self):
        return __class__

such that:

* __name__ is set based on the method name after the dot
* __qualname__ is set based on the __name__ of the given class
* __set_owner__ is called after any function decorators are applied
* zero-argument super() and other __class__ references work properly from the injected method

Potentially, RuntimeError could be raised if the reference before the dot is not to a type instance.

If it *doesn't* do that, then I'd be -1 on the proposal, since it doesn't add enough expressiveness to the language to be worth the extra syntax. By contrast, if it *does* do it, then it makes class definitions more decomposable, by providing post-definition access to parts of the machinery that are currently only accessible during the process of defining the class.

The use case would be to make it easier to inject descriptors when writing class decorators such that they behave essentially the same as they do when defined in the class body:

def my_class_decorator(cls):
    def cls.injected_method(self):
        # Just write injected methods the same way you would in a class body
        return __class__
    return cls

(Actually doing this may require elevating super and __class__ to true keyword expressions, rather than the pseudo-keywords they are now.)

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
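Most of that wish list, short of the `__class__` cell, can be approximated today with a decorator. `inject` below is a hypothetical sketch, not an agreed-upon API: it fixes `__qualname__` and calls `__set_name__` (the hook PEP 487 standardised, which the list sometimes calls `__set_owner__`), but it cannot retrofit zero-argument super() support - which is exactly the part that would need real syntax:

```python
def inject(cls):
    """Hypothetical sketch: bind the decorated object onto *cls*,
    fixing __qualname__ and honouring __set_name__ on descriptors."""
    def decorator(attr):
        name = attr.__name__
        attr.__qualname__ = cls.__qualname__ + '.' + name
        setattr(cls, name, attr)
        # Plain functions have no __set_name__; descriptors that define
        # it get the same notification a class body would give them.
        if hasattr(type(attr), '__set_name__'):
            type(attr).__set_name__(attr, cls, name)
        return attr
    return decorator

class Spam:
    pass

@inject(Spam)
def func(self):
    # NOTE: __class__ / zero-argument super() would NOT work here --
    # the closure cell only exists for functions compiled in a class body.
    return type(self)

assert Spam().func() is Spam
assert Spam.func.__qualname__ == Spam.__qualname__ + '.func'
```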

One thing that I don't think has been mentioned, but that brings me from a +0 to a more negative outlook, is the interaction between this proposal and some of Python's existing class-related features: metaclasses and descriptors. That is, currently we know that function definition, and even method definition, will not have side effects. This potentially changes that, since

def Foo.foo(self): ...

could be a descriptor. Even if it isn't, it's possible that `Foo.foo` is actually resolved from `Foo._foo`, and so this potentially further confuses the naming considerations.

Then we have metaclasses. Prior to this change, it would be fully the monkeypatcher's responsibility to make any metaclass-level changes needed when monkeypatching. However, since we are potentially adding first-class support for certain monkeypatches, it raises a question about some first-class way to handle monkeypatched methods. Do we need to provide some kind of hook to a metaclass writer that allows them to handle methods that are patched on later? Or does the language still ignore it?

--Josh

On Fri, Feb 10, 2017 at 12:20 PM Nick Coghlan <ncoghlan@gmail.com> wrote:

If everything was contained right in the same file, this is sanctioning another way to do it (when there should only be one obvious way). If you have multiple modules/packages, horrors can evolve where a class method could be patched in an unknown location by any loaded module (or you could even introduce order-of-import sensitivities).

For testing, this can be a necessary evil which is OK so long as the patch is limited/apparent, and some other very narrow cases (setuptools something something?). That said, I don't want their use condoned or eased, for fear of proliferation of these "antiprogrammer land mines" that I might trip over in the future.

On Fri, Feb 10, 2017 at 12:15 PM, Joshua Morton <joshua.morton13@gmail.com> wrote:

Please keep in mind that this idea was not created to improve monkey patching; it just happens to be one of the side effects, due to classes being objects. The main use case is the ability to set an instance's callback function (see the Menu example), and to allow the class to be referenced in the function's header, for example in a decorator and during typing. No additional "fancy" features are intended. It would simply replace this:

foo = Bar()

def f():
    ...
foo.f = f

with syntax sugar, similar to how decorators replaced this:

def f():
    ...
f = decorate(f)

On Feb 10, 2017 20:50, "Nick Timkovich" <prometheus235@gmail.com> wrote:

On 02/10/2017 10:48 AM, Nick Timkovich wrote:
If everything was contained right in the same file, this is sanctioning another way to do it (when there should only be one obvious way).
No worries, this way is not obvious.
Folks can still do that nightmare right now. I'm -0.5 on it -- I don't think the payoff is worth the pain. But I'm +1 on writing a PEP -- collect all these pros and cons in one place to save on future discussion. (And (good) PEP writing is a way to earn valuable Python Points!) -- ~Ethan~

Has this REALLY not been discussed and rejected long ago?????
Exactly -- this is obvious enough that it WILL come up again, and I'm sure it has (but my memory gets fuzzy more than a few months back....) It would be great to document it even if it is headed for rejection. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov

Hi all,

For what it's worth, I believe that the "class extension" scenario from Nick can be supported using plain ol' metaclasses. Not sure if this covers all desired capabilities, but at least the super() mechanism works correctly. Syntax is like this:

class Foo(metaclass=class_extend(Foo)):
    ...

See: https://gist.github.com/stephanh42/97b47506e5e416f97f5790c070be7878

Stephan

2017-02-10 19:48 GMT+01:00 Nick Timkovich <prometheus235@gmail.com>:
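The linked gist has Stephan's actual code; the sketch below is my own reconstruction of the idea, under the simplifying assumption that no method in the extension body uses zero-argument super() (handling the `__classcell__` plumbing takes a little more work):

```python
def class_extend(cls):
    """Reconstruction (not the gist's exact code) of a metaclass factory
    that "re-opens" cls: the new class body is merged into cls, and the
    class statement simply rebinds the name to the original class."""
    class ExtendMeta(type):
        def __new__(mcs, name, bases, namespace):
            for key, value in namespace.items():
                if key not in ('__module__', '__qualname__'):
                    setattr(cls, key, value)
            return cls  # no new class object is ever created
    return ExtendMeta

class Foo:
    def greet(self):
        return "hello"

class Foo(metaclass=class_extend(Foo)):
    def shout(self):
        # resolves through the original Foo, which now has both methods
        return self.greet().upper()

assert Foo().shout() == "HELLO"
```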

On Fri, Feb 10, 2017 at 9:20 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
Yes, this is exactly what I would hope/expect to see.

One use case for this functionality is defining classes with an extensive method-based API and a sane dependency graph. For example, consider writing a class like numpy.ndarray <https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html> or pandas.DataFrame <http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html> with dozens of methods. You could argue that using so many methods is an anti-pattern, but nonetheless it's pretty common and hard to avoid in some cases (e.g., for making number-like classes that support arithmetic and comparisons).

For obvious reasons, the functionality for these classes does not all live in a single module. But the modules that define helper functions for most methods also depend on the base class, so many of them need to get imported inside method definitions <https://github.com/pandas-dev/pandas/blob/v0.19.2/pandas/core/frame.py#L1227> to avoid circular imports. The result is pretty ugly, and files defining the class still get gigantic.

An important note is that ideally, we would still have a way of indicating that Spam.func exists on the Spam class itself, even if it doesn't define the implementation. I suppose an abstractmethod overwritten by the later definition might do the trick, e.g.:

class Spam(metaclass=ABCMeta):
    @abstractmethod
    def func(self):
        pass

def Spam.func(self):
    return __class__

And finally, it's quite possible that there's a clean metaclass-based solution for extending Spam in another file, I just don't know it yet.

On 10Feb2017 1400, Stephan Hoyer wrote:
An abstractfunction should not become a concrete function on the abstract class - the right way to do this is to use a subclass:

class SpamBase(metaclass=ABCMeta):
    @abstractmethod
    def func(self):
        pass

class Spam(SpamBase):
    def func(self):
        return __class__

If you want to define parts of the class in separate modules, use mixins:

from myarray.transforms import MyArrayTransformMixin
from myarray.arithmetic import MyArrayArithmeticMixin
from myarray.constructors import MyArrayConstructorsMixin

class MyArray(MyArrayConstructorsMixin, MyArrayArithmeticMixin, MyArrayTransformMixin):
    pass

The big difference between these approaches and the proposal is that the proposal does not require both parties to agree on the approach. This is actually a terrible idea, as subclassing or mixing in a class that wasn't meant for it leads to all sorts of trouble unless the end user is very careful. Providing first-class syntax or methods for this discourages carefulness. (Another way of saying it is that directly overriding class members should feel a bit dirty because it *is* a bit dirty.)

As Paul said in an earlier email, the best use of non-direct assignment in function definitions is putting it into a dispatch dictionary, and in this case making a decorator is likely cleaner than adding new syntax.

But by all means, let's have a PEP. It will simplify the discussion when it comes up in six months again (or whenever the last time this came up was - less than a year, I'm sure).

Cheers,
Steve

Since votes seem to be being counted and used for debate purposes, I am -1 on anything that encourages or condones people adding functionality to classes outside of the class definition. (Monkeypatching in my mind neither condones nor encourages this, and most descriptions come with plenty of caveats about how it should be avoided.)

My favourite description of object-oriented programming is that it's like "reading a road map through a drinking(/soda/pop) straw". We do not need to tell people that it's okay to make this problem worse by providing first-class tools to do it.

But if people are gonna do it anyways with the tools provided (monkey patching), why not provide them with better tools? And this wouldn't only be for classes, but for setting instance attributes too (see the Menu example in original mail). - Markus On Fri, Feb 10, 2017 at 5:38 PM, Steve Dower <steve.dower@python.org> wrote:

On 10 February 2017 at 16:09, Markus Meskanen <markusmeskanen@gmail.com> wrote:
But if people are gonna do it anyways with the tools provided (monkey patching), why not provide them with better tools?
Because encouraging and making it easier for people to make mistakes is the wrong thing to do, surely? Paul

Well yes, but I think you're a bit too fast on labeling it a mistake to use monkey patching... On Feb 10, 2017 18:15, "Paul Moore" <p.f.moore@gmail.com> wrote: On 10 February 2017 at 16:09, Markus Meskanen <markusmeskanen@gmail.com> wrote:
But if people are gonna do it anyways with the tools provided (monkey patching), why not provide them with better tools?
Because encouraging and making it easier for people to make mistakes is the wrong thing to do, surely? Paul

Another point of view: Some call it monkeypatching. Others call it configuration. There's room for both views and I don't see anything wrong with configuration using this kind of feature. Sven On 10.02.2017 17:17, Markus Meskanen wrote:

When you apply the "what if everyone did this" rule, it looks like a bad idea (or alternatively: what if two people, neither expecting anyone else to do this, both did it?). Monkeypatching is fairly blatantly taking advantage of the object model in a way that is not "supported" and cannot behave well in the context of everyone doing it, whereas inheritance or mixins are safe. Making a dedicated syntax or decorator for patching is saying that we (the language) think you should do it. (The extension_method decorator sends exactly the wrong message about what it's doing.)

Enabling a __class__ variable within the scope of the definition would also solve the motivating example, and is less likely to lead to code where you need to review multiple modules and determine whole-program import order to figure out why your calls do not work.

On Fri, Feb 10, 2017 at 12:11:46PM -0600, Steve Dower wrote:
When you apply that rule, Python generally fails badly. In theory, Python is the worst possible language to be programming in, because the introspection capabilities are so powerful, the data hiding so feeble, the dynamicism of the language so great that almost anything written in pure Python can be poked and prodded, bits deleted and new bits inserted. Python doesn't even have constants!!!

import math
math.pi = 3.0  # change the very geometry of spacetime

And yet, this is a problem more in theory than in practice. While you are right to raise this as a possible disadvantage of the proposal ("may ever-so-slightly encourage monkey-patching, by making it seem ever-so-slightly less mucky"), I don't think you are right to weigh it as heavily as you appear to be doing. Python has had setattr() forever, and the great majority of Python programmers manage to avoid abusing it.
That's an extremely optimistic view of things. Guido has frequently alluded to the problems with inheritance (you can't just inherit from anything and expect your code to work), and he's hardly the only one to point out that inheritance and OOP haven't turned out to be the panacea that people hoped. As for mixins, Michele Simionato has written a series of blog posts about the dangers of mixins and multiple inheritance, suggesting traits as a more restricted and safer alternative. Start here: http://www.artima.com/weblogs/viewpost.jsp?thread=246488
Making a dedicated syntax or decorator for patching is saying that we (the language) think you should do it.
We already have that syntax: anything.name = thing
(The extension_method decorator sends exactly the wrong message about what it's doing.)
Are you referring to a decorator something like this?

@extend(TheClass)
def method(self, arg):
    ...

assert TheClass.method is method

Arguments about what it should be called aside, what is the wrong message you see here?
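Such an `extend` decorator is only a few lines; the name follows Steven's example, and this is a sketch rather than a settled API:

```python
def extend(cls):
    """Sketch of the decorator-factory under discussion: bind the
    decorated function onto *cls* under the function's own name."""
    def decorator(func):
        setattr(cls, func.__name__, func)
        return func  # returned unchanged, so the assert below holds
    return decorator

class TheClass:
    pass

@extend(TheClass)
def method(self, arg):
    return arg * 2

assert TheClass.method is method
assert TheClass().method(21) == 42
```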
Enabling a __class__ variable within the scope of the definition would also solve the motivating example,
Can you elaborate on that a bit more? Given the current idiom for injecting a method:

class MyClass:
    ...

# Later on...
def method(self, arg):
    ...

MyClass.method = method
del method

where does the __class__ variable fit into this?

-- Steve

On 12 February 2017 at 04:37, Steven D'Aprano <steve@pearwood.info> wrote:
And the point here is that we don't need to extend def, because we already have that syntax. Adding new syntax for something that we can already do is generally accepted when the "thing we can already do" is deemed sufficiently important that it's worth making it a language feature in its own right. Decorators are a prime example of this - before the decorator syntax was added, decorating functions was just something that people occasionally did, but it wasn't a specific "concept". I'd argue that method injection (to use your phrase) isn't sufficiently important to warrant promotion to language syntax. I will say, though, that you're right that we've over-reacted a bit to the monkeypatching use case. Although maybe that's because no-one can think of many *other* use cases that they'd need the new syntax for :-) Paul

I will say, though, that you're right that we've over-reacted a bit to the monkeypatching use case. Although maybe that's because no-one can think of many *other* use cases that they'd need the new syntax for :-)

Paul

Hi Paul,

I believe at least two use cases other than monkey patching have been mentioned already:

1. Allowing the class to be used in the method's header, e.g. for typing and decorators:

@decorate(MyClass)
def MyClass.method(self, other: MyClass) -> List[MyClass]:
    ...

This is useful since you can't refer to the class itself inside of its body. At the moment the way to use typing is to write the class's name as a string... It feels awful.

2. Registering callbacks to objects, i.e. plainly setting an attribute on an instance. I've used the menu example above:

class Menu:
    def __init__(self, items=None, select_callback=None):
        self.items = items if items is not None else []
        self.select_callback = select_callback

my_menu = Menu(['Pizza', 'Cake', 'Pasta'])

def my_menu.select_callback(item_index):
    if item_index == 0:  # Pizza
        serve_food(pizza)
    else:  # Cake or Pasta
        ...

This is just one example of using it to set an instance's variable to a callback. It's just shorthand for:

def select_callback(item_index):
    ...
my_menu.select_callback = select_callback

The new form reads much more easily and saves us from typing the same thing three times (see decorators).
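The string-based workaround Markus alludes to is the PEP 484 forward-reference spelling; a minimal illustration (`Foo` is just a placeholder class):

```python
from typing import List

class Foo:
    # Forward references must be written as strings, because the name
    # Foo is not bound until the class statement completes:
    def concatenate(self, other: 'Foo') -> 'List[Foo]':
        return [self, other]

f, g = Foo(), Foo()
# The annotations are stored as plain strings until evaluated:
assert Foo.concatenate.__annotations__['other'] == 'Foo'
assert f.concatenate(g) == [f, g]
```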

I think the proposal, so far, seems to confuse two separate things. One is attaching a method to a class after definition. The second is attaching a method to an instance after creation. Or at least it is unclear to me which of those is the intention, since both seem to occur in the examples. Or maybe it's both, but those feel like fairly different use cases. Which is to say, we really need a PEP. As it stands, I'm somewhere around -0.75 on the idea. @decorate(MyClass)
def MyClass.method(self, other: MyClass) -> List[MyClass]:
In this case, the syntax is 100% superfluous. One can simply write:

@decorate(MyClass)
def method(self, other: MyClass) -> List[MyClass]:
    ...

The class is already mentioned in the decorator. If the intention is to add the method to the class, that's fine, and something a decorator can do. Perhaps the spelling for this decorator-factory could be `enhance`. Or more verbosely `inject_method`. Spelling aside, the syntax adds nothing.

my_menu = Menu(['Pizza', 'Cake', 'Pasta'])
Attaching to the instance is fine too. But I prefer the current spelling so far:

my_menu1 = Menu(['Pizza', 'Cake', 'Pasta'])
my_menu2 = Menu(...)

def callback1(self, ...):
    ...

def callback2(self, ...):
    ...

my_menu1.callback = callback2
my_menu2.callback = callback1

Under the current approach, you can flexibly define callbacks outside of the scope of any particular instance or class, and attach them as needed to instances. Obviously the new syntax would not *remove* this option, but it would cover only a narrow subset of what we can already do... and the way we do it now feels much more self-documenting as to intent.

Yours, David...

--
Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th.

I think the proposal, so far, seems to confuse two separate things. One is attaching a method to a class after definition. The second is attaching a method to an instance after creation. Or at least it is unclear to me which of those is the intention, since both seem to occur in the examples. Or maybe it's both, but those feel like fairly different use cases.

Aren't they the same though? Remember that classes are instances of type and methods are just their attributes. We're simply using setattr() in both cases: with instances, and with classes (=instances).

    @decorate(MyClass)
    def MyClass.method(self, other: MyClass) -> List[MyClass]:
In this case, the syntax is 100% superfluous. One can simply write:

    @decorate(MyClass)
    def method(self, other: MyClass) -> List[MyClass]:

The class is already mentioned in the decorator. If the intention is to add the method to the class, that's fine, and something a decorator can do. Perhaps the spelling for this decorator-factory could be `enhance`. Or more verbosely `inject_method`. Spelling aside, the syntax adds nothing.

I think you missed the point; the decorator was just an example and has arbitrary functionality. The point is that you cannot refer to the class itself in its body, so you can't do either of these methods:

    class Foo:
        def concatenate(self, other: Foo) -> Foo:
            ...

        @just_an_example_decorator(mapper=Foo)
        def map(self) -> dict:
            ...

Because Foo is not defined at the time of executing the function headers. The proposed feature would allow you to easily define these after the class definition and allow referring to the class directly.

    my_menu = Menu(['Pizza', 'Cake', 'Pasta'])
Attaching to the instance is fine too. But I prefer the current spelling so far:

    my_menu1 = Menu(['Pizza', 'Cake', 'Pasta'])
    my_menu2 = Menu(...)

    def callback1(self, ...):
        ...

    def callback2(self, ...):
        ...

    my_menu1.callback = callback2
    my_menu2.callback = callback1

I don't, it is repeating the variable name three times. I don't see how this differs from decorator syntax; do you prefer the old way on that too, or am I missing something?

Under the current approach, you can flexibly define callbacks outside of the scope of any particular instance or class, and attach them as needed to instances. Obviously the new syntax would not *remove* this option, but it would cover only a narrow subset of what we can already do... and the way we do it now feels much more self-documenting

I think you answered yourself here: this would not remove the existing flexible way, just like @decorator syntax didn't remove the more flexible way. Honestly, in my opinion this is almost one-to-one comparable with decorator syntax, and I don't think anyone here dares to claim decorators aren't awesome.

- Markus
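For the record, the string-annotation workaround Markus calls awful does work today for the annotation half of his complaint (a sketch of the status quo; note it does not help the decorator-argument case like `@just_an_example_decorator(mapper=Foo)`, since a decorator expression is evaluated immediately):

```python
from typing import List, get_type_hints


class Foo:
    # The class name is not bound yet while this body executes, so the
    # forward references are written as strings.
    def concatenate(self, other: "Foo") -> "Foo":
        ...

    def map_all(self) -> "List[Foo]":
        ...


# Stored as plain strings at definition time...
print(Foo.concatenate.__annotations__["other"])  # 'Foo'
# ...but resolvable to the real class once it exists:
print(get_type_hints(Foo.concatenate)["other"] is Foo)  # True
```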

I haven't repeated any name. Notice that '.callback' is different from 'callback1' or 'callback2'. That's exactly the idea—I can attach *arbitrary* callbacks later on to the '.callback' attribute.
But we already *have* decorators! Here's a nice factory for them:

    def attach_to(thing, name=None):
        def decorator(fn):
            # Resolve the default into a fresh variable; assigning to
            # `name` itself would make it local to `decorator` and
            # raise UnboundLocalError.
            attr_name = name if name is not None else fn.__name__
            setattr(thing, attr_name, fn)
        return decorator

This does everything you are asking for, e.g.:

    my_menu = Menu()

    @attach_to(my_menu)
    def callback(self, ...):
        ...

I got a bit fancy to allow you to either use the same name as the function itself or pick a custom name for the attribute.

Oh, I probably want `return fn` inside my inner decorator. Otherwise, the defined name gets bound to None in the global scope. I'm not sure, maybe that's better... but most likely we should leave the name for other users. I just wrote it without testing. On Sun, Feb 12, 2017 at 10:19 AM, David Mertz <mertz@gnosis.cx> wrote:

On 12 February 2017 at 16:51, Markus Meskanen <markusmeskanen@gmail.com> wrote:
Hi Paul, I believe at least two other use cases than monkey patching have been mentioned already:
My point was that people couldn't think of use cases *they* would need the syntax for. Personally, I'd never use the new syntax for the 2 examples you gave. I don't know if your examples are from real-world code, but they feel artificial to me (the callback one less so, but people have been using callbacks for years without missing this syntax). This is just one example of using it to set an instance's variable to a callback. It's just shorthand for:
This reads much easier
That's personal opinion, and there's a lot of disagreement on this point.
and saves us from typing the same thing three times (see decorators).
That's an objective benefit, sure. I don't think it's major in itself, but that's just *my* opinion :-) You could of course use a shorter name for the function, if it matters to you (it doesn't *have* to be the same as the attribute name). Anyway, let's wait for a PEP that addresses all of the points raised in this thread. Paul

On Sun, Feb 12, 2017 at 11:51 AM, Markus Meskanen <markusmeskanen@gmail.com> wrote:
One issue that has been overlooked so far in this thread is that hypothetical use cases are not as important as real-world use cases. One way that PEPs can demonstrate real-world relevance is by demonstrating the effect on some important libraries, e.g. the standard library. For example, asyncio.Future (and concurrent.futures.Future) has a list of callbacks and the API has add_done_callback() and remove_done_callback() functions for manipulating the callback list. The proposed syntax doesn't cooperate with these callbacks:

    f = asyncio.Future()

    def my_callback(x):
        ...

    f.add_done_callback(my_callback)

How should I write this using the proposed syntax? If the proposal doesn't generalize well enough to cover existing callback patterns in Python's own standard library, then that is a significant weakness. Please keep this in mind as you write the PEP.

On 12 February 2017 at 14:51, Markus Meskanen <markusmeskanen@gmail.com> wrote:
You realize now that if we accept this change, and given your example, any "well behaved" Python code with markup will in a couple of months be required to look like:

    class MyClass:
        """Docstring."""

    def MyClass.__init__(self: MyClass, ...) -> None:
        ...

    # add other methods here.

And all it will take is some bureaucratic-minded person to put that as a default option in some highly used linter, like the one that used to be known as pep8. (And hint: what do you think is the mind orientation of contributors to linter code? :-) ) As a developer constrained to silly rules in automatic linters (like nazi-counting the number of blank lines everywhere) due to project managers' "it's simpler to just stand by the linters' defaults", I feel quite worried about that. So, no, strings for type hinting are much less awful than effectively killing the class body in big projects. Note that this is much more serious than the worries about

    def x["fnord"][5]["gnorts"].method(self, bla):
        ...

which will never be used in sane code anyway. It is the real, present danger of having mandates in whole projects that all methods be defined outside the class body just because of "clean type-hinting". I now am much, much more scared of this proposal than before, and I was already at -1. Please, just let this R.I.P.

js -><-

On Sun, Feb 12, 2017 at 05:01:58PM -0200, Joao S. O. Bueno wrote:
This is pure and unadulterated FUD. Nobody is going to use this as the standard way of writing classes. That would be silly: you end up repeating the class name over and over and over again. And to say that this will happen "in a couple [of] months" is totally unrealistic. Although, I suppose that if the entire Python community did drop 2.7-3.6 and move to 3.7 within just one or two months so they could use this syntax, that would certainly vindicate the (hypothetical) decision to add this syntax. But honestly, no. This is not going to happen. VB.NET and C# have something like this, as does Lua, and people still write classes the ordinary way 99.99% of the time. The chances of this becoming the required, or even the recommended, way to write methods are much less than the chances of President Trump introducing Sharia law to the United States.
And all it will take is some bureaucratic minded person to put that as default option in some highly used linter, like the one that used-to-be-known-as-pep8.
Do you *really* think that a linter that used to be called "PEP8" is going to require as a default syntax which (1) doesn't work before Python 3.7 at the earliest, and (2) has no support in PEP-8? It's one thing to question whether this feature is useful enough to be worth adding. It's another to make panicky claims that the Sky Will Fall if it is accepted. -- Steve

On 13 February 2017 at 00:55, Steven D'Aprano <steve@pearwood.info> wrote:
Sorry - but I just pointed out the effect. The person saying they would start writing classes this way is the grand-parent poster: On 12 February 2017 at 14:51, Markus Meskanen <markusmeskanen@gmail.com> wrote:
You are correct in your message, and thank you for calming me down - but one thing remains: I was really scared by the grand-parent poster, and I still would prefer that this possibility not exist. (And yes, I have code in which I needed to do what is proposed: the extra assignment line did not hurt me at all.)

js -><-

On Sun, Feb 12, 2017, at 21:55, Steven D'Aprano wrote:
The VB/C# thing you are referring to is, I assume, extension methods. But they're really very different when you look at it. Extension methods are only used when the namespace containing them has been imported, and are based on the static type of the object they are being called on. They also have no access to the object's private members. Python doesn't have static types and doesn't have private members, and using this would make a real modification to the type the method is being added to rather than relying on namespaces being imported, so there would be fewer barriers to "use this for everything" than "use extension methods for everything in C#".

On 2017-02-12 14:01, Joao S. O. Bueno wrote:
I am for the method-outside-of-class form. If it is allowed, I will use it extensively:

* instance methods and extension methods have the same form
* fewer lines between the line you are looking at and the name of the class
* an explicit class name helps with searching for methods
* reduces indentation

thanks

Generally speaking, I'm +1 on this idea; I think it would make code more readable, especially for tools like IDEs. I just wanted to ask: can someone point me to the reason Python doesn't support referencing a class inside its own definition? It seems like that would solve some of the cases discussed here, and with type hinting that seems like something that maybe should be considered?

For whatever weight my opinion holds, I'm +0 on this one. In my estimation, in an ideal world it seems like:

    class Foo(object):
        def bar(self):
            """Bar!"""

    # Identical to:

    class Foo(object):
        pass

    def Foo.bar(self):
        """Bar!"""

But I think that's going to be hard to achieve given implicit binding of `super` (as some have already mentioned) and, more mind-bendingly, when user-defined metaclasses are in play. Indeed, with metaclasses, it seems like it becomes impossible to actually guarantee the equality of the above two blocks of code. Maybe the PEP writers are OK with that, but that should be decided at the outset... Also note that if users start adopting this as their default mode of class creation (rather than just *class extending*), code-folding in a lot of IDEs won't handle it gracefully (at least not for quite a while).

On Mon, Feb 13, 2017 at 11:32 AM, Joseph Hackman <josephhackman@gmail.com> wrote:
-- Matt Gilson // SOFTWARE ENGINEER E: matt@getpattern.com // P: 603.892.7736 We’re looking for beta testers. Go here <https://www.getpattern.com/meetpattern> to sign up!

On Mon, Feb 13, 2017 at 11:50:09AM -0800, Matt Gilson wrote:
I think that this is too high a bar to reach (pun not intended). A metaclass can do anything it likes to the methods in the class, and injecting a method after the class already exists is not necessarily the same as including it in the initial namespace argument passed to the metaclass. I think a more reasonable bar is to have

    def Foo.bar(self): ...

equivalent to

    def bar(self): ...
    Foo.bar = bar  # Foo is a class
    del bar

except that the usual class magic like setting __qualname__, super() etc. will work. That feels doable. For instances, the invariant should be slightly different:

    def bar(self): ...
    foo.bar = types.MethodType(bar, foo)  # foo is an instance
    del bar
Indeed.
Why would people use this as the default mode of class creation? I mean, sure there's always that *one guy* who insists on their own weird idiosyncratic way of doing things. I know somebody who refuses to use for loops, and writes all his loops using while. But I can't see this becoming a widespread practice. We all have our quirks, but most of our quirks are not that quirky. -- Steve
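Steven's two invariants can be exercised with today's spelling, which is exactly what the proposed syntax would abbreviate (a minimal sketch of the status quo):

```python
import types


class Foo:
    pass


def bar(self):
    return "on the class"


# Class case: plain attribute assignment, then drop the local name.
Foo.bar = bar
del bar
print(Foo().bar())  # on the class


def baz(self):
    return "on one instance"


# Instance case: bind explicitly with types.MethodType.
foo = Foo()
foo.baz = types.MethodType(baz, foo)
del baz
print(foo.baz())  # on one instance

# Other instances are unaffected by the instance binding:
print(hasattr(Foo(), "baz"))  # False
```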

On 13.02.2017 20:32, Joseph Hackman wrote:
I just wanted to ask: can someone point me to the reason Python doesn't support referencing a class inside its own definition? It seems like that would solve some of the cases discussed here, and with type hinting that seems like something that maybe should be considered?
The class doesn't exist yet, while Python is running the code in its definition block. You can play some tricks with meta classes exposing a .__prepare__() method. This will receive the name of the to-be-created class and allows returning a custom namespace in which the code is run. https://docs.python.org/3.6/reference/datamodel.html#preparing-the-class-nam... The meta class docs have more details on how all this works: https://docs.python.org/3.6/reference/datamodel.html#metaclasses -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Feb 13 2017)
::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/
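A tiny sketch of the __prepare__() trick Marc-Andre describes: the class itself still cannot be exposed (it doesn't exist yet), but its *name* can be seeded into the namespace the body runs in. `NameAwareMeta` and the `CLASS_NAME` key are made-up illustrations, not existing APIs:

```python
class NameAwareMeta(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwargs):
        # The mapping returned here becomes the namespace the class
        # body executes in, so names seeded into it are visible there.
        return {"CLASS_NAME": name}


class Foo(metaclass=NameAwareMeta):
    label = CLASS_NAME + "!"  # CLASS_NAME resolves to "Foo" here


print(Foo.label)  # Foo!
```

(Note that CLASS_NAME also lingers as a class attribute afterwards unless the class body deletes it.)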

On Mon, Feb 13, 2017 at 02:32:33PM -0500, Joseph Hackman wrote:
The simple answer is: since the class doesn't exist yet, you cannot refer to it. The class name is just a regular name: py> MyClass = 'something else' py> class MyClass: ... print(MyClass) ... something else so the interpreter would need to provide some special-cased magic inside the class body to make it work as you expect. That may be a good idea, but it is a separate issue from this. -- Steve

On 12 February 2017 at 12:38, Paul Moore <p.f.moore@gmail.com> wrote:
Note that true method injection would *NOT* be the same as binding a callable as a class attribute after the fact:

- attribute assignment doesn't modify __name__
- attribute assignment doesn't modify __qualname__
- attribute assignment doesn't call __set_name__
- attribute assignment doesn't adjust the __class__ cell reference

Any method injection syntax worthy of the name would need to do those things (probably via a new __setdescriptor__ magic method that is a counterpart to PEP 447's __getdescriptor__).
I'd argue that method injection (to use your phrase) isn't sufficiently important to warrant promotion to language syntax.
There's a lot to be said for implementing mixin behaviour by way of definition time method injection rather than via MRO traversal when looking up method names (although __init_subclass__ took away one of the arguments in favour of it, since mixins can check their invariants at definition time now).
Method injection is most attractive to me as a potential alternative to mixin classes that has fewer runtime side effects by moving more of the work to class definition time. More philosophically though, it offends my language design sensibilities that we have so much magic bound up in class definitions that we don't expose for procedural access post-definition time - there's a whole lot of behaviours that "just happen" when a method is defined lexically inside a class body that can't readily be emulated for callables that are defined outside it. However, even with that, I'm still only +0 on the idea - if folks really want it, `types.new_class` can already be used to creatively to address most of these things, and it's not exactly a problem that comes up very often in practice. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
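The gap Nick describes between class-body definition and after-the-fact assignment is easy to demonstrate today (a sketch; `Recorder` is a made-up class used only to observe the PEP 487 `__set_name__` hook):

```python
class Recorder:
    # __set_name__ is called for attributes present when the class is
    # created; post-hoc attribute assignment skips it entirely.
    def __set_name__(self, owner, name):
        self.owner_name = (owner.__name__, name)


class Inside:
    def meth(self):
        pass
    rec = Recorder()


class Outside:
    pass


def meth(self):
    pass


Outside.meth = meth          # plain attribute assignment
Outside.rec = Recorder()

print(Inside.meth.__qualname__)            # Inside.meth
print(Outside.meth.__qualname__)           # meth -- not adjusted
print(Inside.rec.owner_name)               # ('Inside', 'rec')
print(hasattr(Outside.rec, "owner_name"))  # False -- hook never ran
```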

On 12 February 2017 at 22:29, Nick Coghlan <ncoghlan@gmail.com> wrote:
I'll also note that much of what I'm talking about there could be exposed as a types.bind_descriptor() function that implemented the various adjustments (rebinding __class__ references is tricky though, since the function with a bound closure variable might be hidden inside another descriptor, like property) Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On Sun, Feb 12, 2017 at 10:29:10PM +0100, Nick Coghlan wrote: [...]
If the OP is willing to write a PEP, I think it is worth taking a three-part approach:

- expose the class definition magic that Nick refers to;
- which will allow writing a proper inject_method() decorator;
- or allow def Class.method syntax.

I think I would prefer

    def Class.method ...

over

    @inject_method(Class)
    def method ...
    del method

but given how high the barrier to new syntax is, perhaps we should be willing to take the decorator approach and leave the syntax for the future, once people have got used to the idea that extension methods won't cause the fall of civilization as we know it :-)
Swift and Objective-C users might, I think, disagree with that: they even have a term for this, "swizzling". This is part of the "Interceptor" design pattern: https://en.wikipedia.org/wiki/Interceptor_pattern -- Steve

On Fri, Feb 10, 2017 at 06:17:54PM +0200, Markus Meskanen wrote:
Well yes, but I think you're a bit too fast on labeling it a mistake to use monkey patching...
More importantly, I think we're being a bit too quick to label this technique "monkey-patching" at all. Monkey-patching (or MP for brevity) implies making modifications to some arbitrary *uncooperative* class (or instance). When you're plugging electrodes into a monkey's brain, the monkey has no say in it. This proposed syntax can, of course, be used that way, but Python is already a "consenting adults" language and already has setattr:

    setattr(some_class, 'get_shrubbery', get_shrubbery)

which is all you need to enable MP for good or evil.

There have been a few times where I would have used this syntax if it had been available, and none of them were MP. They were injecting methods into classes I controlled. I suspect that this technique wouldn't feel so bad if we had a proper, respectable-sounding "design pattern" name for it, like "method injection" or something. I expect that the only reason there is no name for this is that Java doesn't allow it. (I think.) So I'm going to call it "method injection".

I don't think there's any evidence that slightly cleaner syntax for method injection will encourage MP. We already have clean syntax to inject arbitrary attributes (including methods made with lambda):

    TheClass.method = lambda self: ...

and I don't think there's an epidemic of MP going on.

-- Steve

Chris Angelico wrote:
Which is why these proposals always seem to gravitate to "anything you can assign to",
There might be some parsing difficulties with that, e.g.

    def foo(x)[5](y, z):
        ...

That should be acceptable, because foo(x)[5] is something assignable, but foo(x) looks like the beginning of the definition of a function called foo. I'm not sure whether the parser would cope with that.

-- Greg

On Sat, Feb 11, 2017, at 00:33, Greg Ewing wrote:
We could require parentheses to be used anywhere the grammar otherwise couldn't handle it, like yielding a tuple from a generator expression.

    def (whatever)(args):

This does raise the question, though, of what the function's name/qualname would be. It's cosmetic, but it's also the only real difference between def and an assignment *now*, so it's worth considering. In the case where the last element of the expression is an attribute, the name would simply be the attribute, but would the class portion of the qualname (and the name when it's not an attribute) need to depend on the runtime value of what is being assigned, or would it simply use a string of exactly "foo(x)[5]"?

On Sun, Feb 12, 2017 at 3:38 PM, Steven D'Aprano <steve@pearwood.info> wrote:
So you think the language should prevent silly assignments?
Given that Python is happy to do these kinds of assignments in 'for' statements, I don't see any reason to prevent them in 'def' statements. It's not the language's job to prevent abuse; at best, that's a job for a style guide. ChrisA

On Sun, Feb 12, 2017 at 03:50:03PM +1100, Chris Angelico wrote:
On a case-by-case basis, of course.
I have no idea what that does except by studying the code with great care and essentially running it in my own mental Python interpreter. Should it be prohibited? Not now, that would break backwards compatibility. If it were 1991 or thereabouts again, and Python 0.1 was newly released, and somebody suggested an enhancement to the language that would specifically allow that awfulness, would you be in favour of allowing it?

If it were 1991, I'd seriously consider arguing that the loop assignment target should be restricted to a simple name or tuple of names. It's one thing to say "this abomination is allowed because of historical reasons", and another to say "I think your proposal isn't general enough. We should generalise your nice, clean, simple proposal to something nobody in their right mind would ever use!" In fact, if I were more cynical, I'd wonder whether you were trying to sabotage this proposal by over-generalising it to something that has no good use-cases. *multiple smileys*

There's a lot of this sort of thing on Python-Ideas:

"I think it would be good if Python included a stapler, as a lightweight, quick and easy way to join sheets of paper."

"Excellent idea! I think the stapler should include a drill attachment, a sledge hammer and a crowbar, in case you wish to staple the paper to a concrete slab. You could use the drill with a masonry bit to drill into the concrete slab, then you hammer the extra-giant-size staple through the paper and the drill holes. Of course you'll need to use the crowbar to flip the slab upside down so you can hammer the other side of the staple flat. The slab might be thicker than your drill bit, so the stapler also needs X-ray imaging equipment so you can line up the holes you drill from each side and ensure they meet up correctly."

*wink*
For many years, preventing this sort of abuse is exactly what the language has done. This proposal is to introduce a *very slight* loosening of the restriction, not tear the whole thing down. There is plenty of good precedent for restricting assignment targets:

    py> errors = [None]
    py> try:
    ...     pass
    ... except Exception as errors[0]:
      File "<stdin>", line 3
        except Exception as errors[0]:
                                  ^
    SyntaxError: invalid syntax

    py> import math as mymodules[-1]
      File "<stdin>", line 1
        import math as mymodules[-1]
                                ^
    SyntaxError: invalid syntax

And similar restrictions on decorators:

    py> @decorators['key']
      File "<stdin>", line 1
        @decorators['key']
                   ^
    SyntaxError: invalid syntax

It's easy to loosen the restriction later if necessary, and all but impossible to tighten it up again if the original decision turns out to be a mistake. My view regarding syntax changes is: what is the *smallest* change to syntax that will satisfy the use-case? Not the broadest or most general. It would be different if you had concrete use-cases for the generalisation to any arbitrary assignment target. But as it stands, it is a clear case of YAGNI, and it complicates the question of what __name__ and __qualname__ should be set to.

-- Steve


On Fri, Feb 10, 2017 at 10:05:30PM +1100, Chris Angelico wrote:
* What would the __name__ be? In "def ham.spam():", is the name "spam" or "ham.spam"?
"spam" of course, just like it is now: py> class Ham: ... def spam(self): ... ... ... py> py> Ham.spam.__name__ 'spam' You might be thinking of __qualname__: py> Ham.spam.__qualname__ 'Ham.spam'
Or say you have "def x[0]():" - is the name "x[0]" or something else?
I wouldn't allow that. I feel that "any assignment target at all" is an over-generalisation, a case of YAGNI. It is relatively easy to change our mind and add additional cases in the future, but very difficult to remove them if they turn out to be a mistake. My intuition tells me that we should allow:

    def name dot name (args):

possibly even more than one dot:

    def name dot name dot name ... (args):

but no additional cases:

    # syntax error
    def spam[0]function():
        ...

-- Steve

I am definitely -1 on this idea. But since you are discussing this seriously, one nice thing is to recall how Javascript does it: `function <name> ()` is an expression that returns the created function, and thus can be assigned to anything on the left side. Of course, that would throw us back to a way of thinking of inline definition of multiline functions - which is another requested and unresolved thing in Python. (But we might require the `def` statement to still be aligned, at least style-wise, and require people to write

    Foo.foo = \
        def (self, ...):
            ...

) That said, this possibility in Javascript is the source of severe inconsistencies in how functions are declared across different libraries and projects, and IMHO makes reading (and writing) a real pain. (And, as stated above, a two-line decorator could handle the patching - it does not need to have such an ugly name as "monkey_patch" - it could be just "assign" instead.)

js -><-

On 10 February 2017 at 09:51, Steven D'Aprano <steve@pearwood.info> wrote:

On Fri, Feb 10, 2017 at 02:28:25PM +0200, Markus Meskanen wrote:
I've started working on a PEP for this since most people seem to be for it.
I don't know how you get "most people" -- there's only been a handful of responses in the few hours since the original post. And apart from one explicit -1, I read most of them as neutral, not in favour. Of course you are perfectly entitled to start work on a PEP at any time, but don't get your hopes up. I'm one of the neutral parties, perhaps just a tiny bit positive +0, but only for the original proposal. I am -1000 on allowing arbitrary assignment targets. I believe that the cost in readability far outweighs the usefulness of allowing things like:

    def mydict[key].attr[-1](arg):
        ...

-- Steve

Yeah, I worded that poorly; more like most people didn't turn me down, which I was a bit afraid of.
Do not worry, I will not propose the advanced method, only dot notation! That being said, I don't think it's up to the language if someone wants to write ugly code like that; you can already do way uglier stuff with the existing features. I don't really see people doing this either:

    mydict[key].attr[-1].append(my_func)

So why would they if we suddenly introduce this for functions? Anyway, that's not a worry of this to-be PEP.

- Markus

Hi all, I would like to add one more generic remark about syntax extensions, regarding something Markus said and which has bothered me before, also related to other syntax proposals. "Decorator approach is no different from doing `Foo.bar = bar` under the function definition I think, except it requires one to figure out what the decorator does first." My point would be that the new syntax *also* requires one to figure out what the new syntax does. And unfortunately, syntax is much less discoverable than decorators. For a decorator, I can do `help(decorator)' or search the python library reference or probably just mouse-hover over the name in my favourite editor/IDE. But if I don't understand the dot in `class foo.bar:', then what? It's probably somewhere buried in the language spec for `class' but realistically I am now going to blight Stackoverflow with my questions. Stephan 2017-02-10 13:13 GMT+01:00 Joao S. O. Bueno <jsbueno@python.org.br>:

On 10 February 2017 at 13:55, Stephan Houben <stephanh42@gmail.com> wrote:
My point would be that the new syntax *also* requires one to figure out what the new syntax does.
This is an extremely good point. It is mentioned when new syntax is proposed (the term often used is "discoverability") but the idea never seems to stick, as people keep forgetting to consider it when proposing new ideas. With this proposal the only thing you can search for is "def", and you're going to mostly find sites that explain the current syntax. So anyone looking for understanding of the new construct will likely end up even more confused after searching than they were before. Markus - if you do write up a PEP, please make sure this point is noted and addressed. Paul

I deeply believe the dot notation is very simple to understand (for the record, it's the default in JS and Lua and they're not having any issues with it), and I can't think of a situation where someone knows Python well enough to understand decorators with arguments but wouldn't understand the dot notation. We already use the dot notation for normal attributes, so why not use it for attributes in a function def? I think it'll be easier to StackOverflow the dot notation as opposed to argumented decorators. And what I meant by "they have to figure out what the decorator does first" is that it varies between every project. You absolutely cannot know for sure what the decorator does until you read through it, meaning you have to go look it up every time.

- Markus

Hi list, I'm quite neutral to this proposition, as it's not a use case I see often myself needing. On Fri, Feb 10, 2017 at 02:55:31PM +0100, Stephan Houben wrote: […]
but this is definitely not a reason to dismiss a proposal. A language is aimed at evolves and introduce new syntax features, and yes, stackoverflow will get questions about it, blog articles written and RTFW updated, so you'll get the info you'll need fastly. Cheers, -- Guyzmo

On 10 February 2017 at 10:45, Markus Meskanen <markusmeskanen@gmail.com> wrote:
In implementation terms, the syntax change is not as minor as you suggest. At the moment, the syntax for a "def" statement is:

    funcdef  ::= [decorators] "def" funcname "(" [parameter_list] ")" ["->" expression] ":" suite
    funcname ::= identifier

You're proposing replacing "identifier" as the definition of a "funcname" with... what? dotted_name might work, but that opens up the possibility of

    class Foo: pass
    foo = Foo()

    def foo.a(self): pass

(note I'm defining a method on the *instance*, not on the class). Do you want to allow that? What about "def a.b.c.d.e(): pass" (no self as argument, deeply nested instance attribute)? Furthermore, once we open up this possibility, I would expect requests for things like

    func_table = {}
    func_table["foo"] = lambda a, b: a + b

    def func_table["bar"](a, b):
        return a - b

pretty quickly. How would you respond to those? (Setting up function tables is a much more common and reasonable need than monkeypatching classes.) Your proposal is clear enough in terms of your intent, but the implementation details are non-trivial. Paul

PS Personally, I'm slightly in favour of the idea in principle, but I don't think it's a useful enough addition to warrant having to deal with all the questions I note above.
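[Editorial note: the function-table pattern Paul mentions can already be served today without new syntax. A minimal sketch of a decorator-factory approach; the name `register` is illustrative, not from the thread.]

```python
# A sketch of the dispatch-table alternative: instead of hypothetical
# "def func_table['bar'](...)" syntax, a small decorator factory puts
# the function into the table at definition time.
def register(table, key=None):
    def decorator(fn):
        # default the key to the function's own name
        table[key if key is not None else fn.__name__] = fn
        return fn
    return decorator

func_table = {}
func_table["foo"] = lambda a, b: a + b

@register(func_table, "bar")
def subtract(a, b):
    return a - b

print(func_table["bar"](5, 2))  # prints 3
```

This keeps the function's name bound in the module as well, so it can be tested and documented like any other function.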

On 10 February 2017 at 12:16, Chris Angelico <rosuav@gmail.com> wrote:
But what do __name__ and __qualname__ get set to? What happens if you do this at class scope, rather than at module level or inside another function? What happens to the zero-argument super() support at class scope? What happens if you attempt to use zero-argument super() when *not* at class scope? These are *answerable* questions (and injecting the right __class__ cell reference for zero-argument super() support is a compelling technical argument in favour of this feature over ordinary attribute binding operations), but there's a lot more to the proposal than just relaxing a syntactic restriction in the language grammar. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On Sat, Feb 11, 2017 at 1:16 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
... and are exactly why I asked the OP to write up a PEP. This isn't my proposal, so it's not up to me to make the decisions. For what it's worth, my answers would be: __name__ would be the textual representation of exactly what you typed between "def" and the open parenthesis. __qualname__ would be built the exact same way it currently is, based on that __name__. Zero-argument super() would behave exactly the way it would if you used a simple name. This just changes the assignment, not the creation of the function. So if you're inside a class, you could populate a lookup dictionary with method-like functions. Abuse this, and you're only shooting your own foot. Zero-argument super() outside of a class, just as currently, would be an error. (Whatever kind of error it currently is.) Maybe there are better answers to these questions, I don't know. That's what the PEP's for. ChrisA
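[Editorial note: for reference when judging Chris's "built the exact same way" answer, this is how __name__ and __qualname__ are constructed today.]

```python
# __qualname__ is derived from the lexical nesting of the definition,
# prefixed onto the plain __name__.
class Ham:
    def spam(self):
        pass

def outer():
    def inner():
        pass
    return inner

print(Ham.spam.__name__)       # spam
print(Ham.spam.__qualname__)   # Ham.spam
print(outer().__qualname__)    # outer.<locals>.inner
```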

On Sat, Feb 11, 2017 at 01:25:40AM +1100, Chris Angelico wrote:
If I'm reading this right, you want this behaviour:

    class Spam:
        pass

    def Spam.func(self):
        pass

    assert 'Spam.func' not in Spam.__dict__
    assert 'func' in Spam.__dict__
    assert Spam.func.__name__ == 'Spam.func'
    assert Spam.func.__qualname__ == 'Spam.Spam.func'

If that's the case, I can only ask... what advantage do you see from this? Because I can see plenty of opportunity for confusion, and no advantage. For what it's worth, Lua already has this feature: http://www.lua.org/pil/6.2.html

    Lib = {}
    function Lib.foo (x, y)
        return x + y
    end

If we define that function foo inside the Lib table, and then cause an error, the Lua interpreter tells us the function name:
-- Steve

On Sat, Feb 11, 2017 at 2:25 AM, Steven D'Aprano <steve@pearwood.info> wrote:
I might be wrong about the __name__; that was a response that came from the massively extensive research of "hmm, I think this would be what I'd do". It seems the simplest way to cope with the many possibilities; having __name__ be "func" would work in the dot form, but not others. But that's bikeshedding. ChrisA

On 10 February 2017 at 16:25, Steven D'Aprano <steve@pearwood.info> wrote:
What I would personally hope to see from the proposal is that given:

    class Spam:
        pass

    def Spam.func(self):
        return __class__

the effective runtime behaviour would be semantically identical to:

    class Spam:
        def func(self):
            return __class__

such that:

* __name__ is set based on the method name after the dot
* __qualname__ is set based on the __name__ of the given class
* __set_name__ is called after any function decorators are applied
* zero-argument super() and other __class__ references work properly from the injected method

Potentially, RuntimeError could be raised if the reference before the dot is not to a type instance. If it *doesn't* do that, then I'd be -1 on the proposal, since it doesn't add enough expressiveness to the language to be worth the extra syntax. By contrast, if it *does* do it, then it makes class definitions more decomposable, by providing post-definition access to parts of the machinery that are currently only accessible during the process of defining the class. The use case would be to make it easier to inject descriptors when writing class decorators such that they behave essentially the same as they do when defined in the class body:

    def my_class_decorator(cls):
        # Just write injected methods the same way you would in a class body
        def cls.injected_method(self):
            return __class__
        return cls

(Actually doing this may require elevating super and __class__ to true keyword expressions, rather than the pseudo-keywords they are now.) Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
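[Editorial note: a small sketch of the gap Nick describes: `__set_name__` fires for descriptors assigned in a class body, but not for plain attribute assignment afterwards, which is part of what the proposed syntax would have to paper over.]

```python
# __set_name__ is invoked by type.__new__ while the class is being
# created; a later setattr() bypasses it entirely.
class Described:
    def __set_name__(self, owner, name):
        self.name = name

class InBody:
    field = Described()        # __set_name__ called during class creation

late = Described()

class AfterTheFact:
    pass

AfterTheFact.field = late      # plain setattr: __set_name__ is NOT called

print(InBody.field.name)       # field
print(hasattr(late, 'name'))   # False
```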

One thing that I don't think has been mentioned, but that brings me from a +0 to a more negative outlook, is the interaction between this proposal and some of Python's existing class-related features: metaclasses and descriptors. That is, currently we know that function definition, and even method definition, will not have side effects. This potentially changes that, since

    def Foo.foo(self): ...

could be a descriptor. Even if it doesn't, it's possible that `Foo.foo` is actually resolved from `Foo._foo`, and so this potentially further confuses the naming considerations. Then we have metaclasses. Prior to this change, it would be fully the monkeypatcher's responsibility to do any metaclass-level changes if they were necessary when monkeypatching. However, since we are potentially adding first-class support for certain monkeypatches, it raises a question about some first-class way to handle monkeypatched methods. Do we need to provide some kind of method to a metaclass writer that allows them to handle methods that are patched on later? Or does the language still ignore it? --Josh On Fri, Feb 10, 2017 at 12:20 PM Nick Coghlan <ncoghlan@gmail.com> wrote:

If everything was contained right in the same file, this is sanctioning another way to do it (when there should only be one obvious way). If you have multiple modules/packages, horrors can evolve where a class method could be patched in an unknown location by any loaded module (or you could even introduce order-of-import sensitivities). For testing, this can be a necessary evil which is OK so long as the patch is limited/apparent, and some other very narrow cases (setuptools something something?). That said, I don't want their use condoned or eased for fear of proliferation of these "antiprogrammer land mines" that I might trip over in the future. On Fri, Feb 10, 2017 at 12:15 PM, Joshua Morton <joshua.morton13@gmail.com> wrote:

Please keep in mind that this idea was not created to improve monkey patching, it just happens to be one of the side effects due to classes being objects. The main use case is the ability to set an instance's callback function (see the Menu example), and to allow the class to be referenced in the function's header; for example in a decorator and during typing. No additional "fancy" features are intended; it would simply replace this:

    foo = Bar()

    def f():
        ...
    foo.f = f

with syntax sugar, similar to how decorators replaced this:

    def f():
        ...
    f = decorate(f)

On Feb 10, 2017 20:50, "Nick Timkovich" <prometheus235@gmail.com> wrote:

On 02/10/2017 10:48 AM, Nick Timkovich wrote:
If everything was contained right in the same file, this is sanctioning another way to do it (when there should only be one obvious way).
No worries, this way is not obvious.
Folks can still do that nightmare right now. I'm -0.5 on it -- I don't think the payoff is worth the pain. But I'm +1 on writing a PEP -- collect all these pros and cons in one place to save on future discussion. (And (good) PEP writing is a way to earn valuable Python Points!) -- ~Ethan~

Has this REALLY not been discussed and rejected long ago?????
Exactly -- this is obvious enough that it WILL come up again, and I'm sure it has (but my memory gets fuzzy more than a few months back....) It would be great to document it even if it is headed for rejection. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov

Hi all, For what it's worth, I believe that the "class extension" scenario from Nick can be supported using plain ol' metaclasses. Not sure if this covers all desired capabilities, but at least the super() mechanism works correctly. Syntax is like this:

    class Foo(metaclass=class_extend(Foo)):
        ...

See: https://gist.github.com/stephanh42/97b47506e5e416f97f5790c070be7878 Stephan 2017-02-10 19:48 GMT+01:00 Nick Timkovich <prometheus235@gmail.com>:
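[Editorial note: a minimal sketch of what such a `class_extend` metaclass factory could look like; Stephan's gist may differ in detail. The `__classcell__` handling is what makes zero-argument super() in the new methods resolve correctly, and writable cells require Python 3.7+.]

```python
def class_extend(cls):
    """Metaclass factory: re-executing a class statement with this
    metaclass copies the new body onto the existing `cls` instead of
    creating a fresh class. A sketch, not Stephan's exact code."""
    class ExtendMeta(type):
        def __new__(meta, name, bases, namespace):
            # The compiler injects __classcell__ when the body uses
            # super()/__class__; fill it with `cls` by hand since we
            # never call type.__new__.
            cell = namespace.pop('__classcell__', None)
            for attr, value in namespace.items():
                if attr not in ('__module__', '__qualname__', '__doc__'):
                    setattr(cls, attr, value)
            if cell is not None:
                cell.cell_contents = cls
            return cls
    return ExtendMeta

class Base:
    def greet(self):
        return "base"

class Foo(Base):
    pass

# "Extend" Foo in place; zero-argument super() works in the new method.
class Foo(metaclass=class_extend(Foo)):
    def greet(self):
        return "extended " + super().greet()

print(Foo().greet())  # extended base
```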

On Fri, Feb 10, 2017 at 9:20 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
Yes, this is exactly what I would hope/expect to see. One use case for this functionality is defining classes with an extensive method-based API with a sane dependency graph. For example, consider writing a class like numpy.ndarray <https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html> or pandas.DataFrame <http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html> with dozens of methods. You could argue that using so many methods is an anti-pattern, but nonetheless it's pretty common and hard to avoid in some cases (e.g., for making number-like classes that support arithmetic and comparisons). For obvious reasons, the functionality for these classes does not all live in a single module. But the modules that define helper functions for most methods also depend on the base class, so many of them need to get imported inside method definitions <https://github.com/pandas-dev/pandas/blob/v0.19.2/pandas/core/frame.py#L1227> to avoid circular imports. The result is pretty ugly, and files defining the class still get gigantic. An important note: ideally, we would still have a way of indicating that Spam.func exists on the Spam class itself, even if it doesn't define the implementation there. I suppose an abstractmethod overwritten by the later definition might do the trick, e.g.:

    class Spam(metaclass=ABCMeta):
        @abstractmethod
        def func(self):
            pass

    def Spam.func(self):
        return __class__

And finally, it's quite possible that there's a clean metaclass-based solution for extending Spam in another file, I just don't know it yet.

On 10Feb2017 1400, Stephan Hoyer wrote:
An abstractmethod should not become a concrete function on the abstract class - the right way to do this is to use a subclass.

    class SpamBase(metaclass=ABCMeta):
        @abstractmethod
        def func(self):
            pass

    class Spam(SpamBase):
        def func(self):
            return __class__

If you want to define parts of the class in separate modules, use mixins:

    from myarray.transforms import MyArrayTransformMixin
    from myarray.arithmetic import MyArrayArithmeticMixin
    from myarray.constructors import MyArrayConstructorsMixin

    class MyArray(MyArrayConstructorsMixin,
                  MyArrayArithmeticMixin,
                  MyArrayTransformMixin):
        pass

The big difference between these approaches and the proposal is that the proposal does not require both parties to agree on the approach. This is actually a terrible idea, as subclassing or mixing in a class that wasn't meant for it leads to all sorts of trouble unless the end user is very careful. Providing first-class syntax or methods for this discourages carefulness. (Another way of saying it is that directly overriding class members should feel a bit dirty because it *is* a bit dirty.) As Paul said in an earlier email, the best use of non-direct assignment in function definitions is putting it into a dispatch dictionary, and in this case making a decorator is likely cleaner than adding new syntax. But by all means, let's have a PEP. It will simplify the discussion when it comes up in six months again (or whenever the last time this came up was - less than a year, I'm sure). Cheers, Steve

Since votes seem to be being counted and used for debate purposes, I am -1 on anything that encourages or condones people adding functionality to classes outside of the class definition. (Monkeypatching as it stands neither condones nor encourages this, and most descriptions come with plenty of caveats about how it should be avoided.) My favourite description of object-oriented programming is that it's like "reading a road map through a drinking(/soda/pop) straw". We do not need to tell people that it's okay to make this problem worse by providing first-class tools to do it. Top-posted from my Windows Phone

But if people are gonna do it anyways with the tools provided (monkey patching), why not provide them with better tools? And this wouldn't only be for classes, but for setting instance attributes too (see the Menu example in original mail). - Markus On Fri, Feb 10, 2017 at 5:38 PM, Steve Dower <steve.dower@python.org> wrote:

On 10 February 2017 at 16:09, Markus Meskanen <markusmeskanen@gmail.com> wrote:
But if people are gonna do it anyways with the tools provided (monkey patching), why not provide them with better tools?
Because encouraging and making it easier for people to make mistakes is the wrong thing to do, surely? Paul

Well yes, but I think you're a bit too fast on labeling it a mistake to use monkey patching... On Feb 10, 2017 18:15, "Paul Moore" <p.f.moore@gmail.com> wrote:

Another point of view: Some call it monkeypatching. Others call it configuration. There's room for both views and I don't see anything wrong with configuration using this kind of feature. Sven On 10.02.2017 17:17, Markus Meskanen wrote:

When you apply the "what if everyone did this" rule, it looks like a bad idea (or alternatively, what if two people who weren't expecting anyone else to do this did it). Monkeypatching is fairly blatantly taking advantage of the object model in a way that is not "supported" and cannot behave well in the context of everyone doing it, whereas inheritance or mixins are safe. Making a dedicated syntax or decorator for patching is saying that we (the language) think you should do it. (The extension_method decorator sends exactly the wrong message about what it's doing.) Enabling a __class__ variable within the scope of the definition would also solve the motivating example, and is less likely to lead to code where you need to review multiple modules and determine whole-program import order to figure out why your calls do not work. Top-posted from my Windows Phone

On Fri, Feb 10, 2017 at 12:11:46PM -0600, Steve Dower wrote:
When you apply that rule, Python generally fails badly. In theory, Python is the worst possible language to be programming in, because the introspection capabilities are so powerful, the data hiding so feeble, and the dynamism of the language so great that almost anything written in pure Python can be poked and prodded, bits deleted and new bits inserted. Python doesn't even have constants!!!

    import math
    math.pi = 3.0  # change the very geometry of spacetime

And yet, this is a problem more in theory than in practice. While you are right to raise this as a possible disadvantage of the proposal ("may ever-so-slightly encourage monkey-patching, by making it seem ever-so-slightly less mucky"), I don't think you are right to weigh it as heavily as you appear to be doing. Python has had setattr() forever, and the great majority of Python programmers manage to avoid abusing it.
That's an extremely optimistic view of things. Guido has frequently alluded to the problems with inheritance (you can't just inherit from anything and expect your code to work), and he's hardly the only one who has pointed out that inheritance and OOP haven't turned out to be the panacea that people hoped. As for mixins, Michele Simionato has written a series of blog posts about the dangers of mixins and multiple inheritance, suggesting traits as a more restricted and safer alternative. Start here: http://www.artima.com/weblogs/viewpost.jsp?thread=246488
Making a dedicated syntax or decorator for patching is saying that we (the language) think you should do it.
We already have that syntax: anything.name = thing
(The extension_method decorator sends exactly the wrong message about what it's doing.)
Are you referring to a decorator something like this?

    @extend(TheClass)
    def method(self, arg):
        ...

    assert TheClass.method is method

Arguments about what it should be called aside, what is the wrong message you see here?
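[Editorial note: for concreteness, the `extend` decorator sketched above can be written in a few lines. A hypothetical helper, not an existing stdlib name.]

```python
def extend(cls):
    """Bind the decorated function onto `cls` under its own name."""
    def decorator(fn):
        setattr(cls, fn.__name__, fn)
        return fn  # keep the module-level name bound too
    return decorator

class TheClass:
    pass

@extend(TheClass)
def method(self, arg):
    return arg * 2

# Exactly the property asserted in the thread: the class attribute
# is the very same function object.
assert TheClass.method is method
print(TheClass().method(21))  # 42
```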
Enabling a __class__ variable within the scope of the definition would also solve the motivating example,
Can you elaborate on that a bit more? Given the current idiom for injecting a method:

    class MyClass:
        ...

    # Later on...
    def method(self, arg):
        ...

    MyClass.method = method
    del method

where does the __class__ variable fit into this? -- Steve

On 12 February 2017 at 04:37, Steven D'Aprano <steve@pearwood.info> wrote:
And the point here is that we don't need to extend def, because we already have that syntax. Adding new syntax for something that we can already do is generally accepted when the "thing we can already do" is deemed sufficiently important that it's worth making it a language feature in its own right. Decorators are a prime example of this - before the decorator syntax was added, decorating functions was just something that people occasionally did, but it wasn't a specific "concept". I'd argue that method injection (to use your phrase) isn't sufficiently important to warrant promotion to language syntax. I will say, though, that you're right that we've over-reacted a bit to the monkeypatching use case. Although maybe that's because no-one can think of many *other* use cases that they'd need the new syntax for :-) Paul

I will say, though, that you're right that we've over-reacted a bit to the monkeypatching use case. Although maybe that's because no-one can think of many *other* use cases that they'd need the new syntax for :-) Paul

Hi Paul, I believe at least two other use cases than monkey patching have been mentioned already:

1. Allowing the class to be used in the method's header, e.g. for typing and decorators:

    @decorate(MyClass)
    def MyClass.method(self, other: MyClass) -> List[MyClass]:
        ...

This is useful since you can't refer to the class itself inside of its body. At the moment the way to use typing is to write the class's name as a string... It feels awful.

2. To register callbacks to objects, i.e. plainly set an attribute on an instance. I've used the menu example above:

    class Menu:
        def __init__(self, items=None, select_callback=None):
            self.items = items if items is not None else []
            self.select_callback = select_callback

    my_menu = Menu(['Pizza', 'Cake', 'Pasta'])

    def my_menu.select_callback(item_index):
        if item_index == 0:  # Pizza
            serve_food(pizza)
        else:  # Cake or Pasta
            ...

This is just one example of using it to set an instance's variable to a callback. It's just shorthand for:

    def select_callback(item_index):
        ...
    my_menu.select_callback = select_callback

This reads much easier and saves us from typing the same thing three times (see decorators).
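[Editorial note: the forward-reference pain in use case 1 has non-syntax workarounds today: quoting the name as a string, or postponed evaluation of annotations (PEP 563), which makes the class name usable in its own method headers. A quick sketch.]

```python
from __future__ import annotations  # annotations stored as strings, not evaluated
from typing import List

class Foo:
    # `Foo` in the signature is fine here: with postponed evaluation the
    # annotation is only resolved on demand (e.g. typing.get_type_hints).
    def paired(self, other: Foo) -> List[Foo]:
        return [self, other]

pair = Foo().paired(Foo())
print(len(pair))  # 2
```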

I think the proposal, so far, seems to confuse two separate things. One is attaching a method to a class after definition. The second is attaching a method to an instance after creation. Or at least it is unclear to me which of those is the intention, since both seem to occur in the examples. Or maybe it's both, but those feel like fairly different use cases. Which is to say, we really need a PEP. As it stands, I'm somewhere around -0.75 on the idea. @decorate(MyClass)
def MyClass.method(self, other: MyClass) -> List[MyClass]:
In this case, the syntax is 100% superfluous. One can simply write:

    @decorate(MyClass)
    def method(self, other: MyClass) -> List[MyClass]:
        ...

The class is already mentioned in the decorator. If the intention is to add the method to the class, that's fine, and something a decorator can do. Perhaps the spelling for this decorator-factory could be `enhance`. Or more verbosely `inject_method`. Spelling aside, the syntax adds nothing.

my_menu = Menu(['Pizza', 'Cake', 'Pasta'])
Attaching to the instance is fine too. But I prefer the current spelling so far:

    my_menu1 = Menu(['Pizza', 'Cake', 'Pasta'])
    my_menu2 = Menu(...)

    def callback1(self, ...):
        ...

    def callback2(self, ...):
        ...

    my_menu1.callback = callback2
    my_menu2.callback = callback1

Under the current approach, you can flexibly define callbacks outside of the scope of any particular instance or class, and attach them as needed to instances. Obviously the new syntax would not *remove* this option, but it would cover only a narrow subset of what we can already do... and the way we do it now feels much more self-documenting as to intent. Yours, David... -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th.

I think the proposal, so far, seems to confuse two separate things. One is attaching a method to a class after definition. The second is attaching a method to an instance after creation. Or at least it is unclear to me which of those is the intention, since both seem to occur in the examples. Or maybe it's both, but those feel like fairly different use cases. Aren't they the same though? Remember that classes are instances of type and methods are just their attributes. We're simply using setattr() in both cases: with instances, and with classes (=instances). @decorate(MyClass)
def MyClass.method(self, other: MyClass) -> List[MyClass]:
In this case, the syntax is 100% superfluous. One can simply write:

    @decorate(MyClass)
    def method(self, other: MyClass) -> List[MyClass]:
        ...

The class is already mentioned in the decorator. If the intention is to add the method to the class, that's fine, and something a decorator can do. Perhaps the spelling for this decorator-factory could be `enhance`. Or more verbosely `inject_method`. Spelling aside, the syntax adds nothing.

I think you missed the point; the decorator was just an example and has arbitrary functionality. The point is that you cannot refer to the class itself in its body, so you can't do either of these methods:

    class Foo:
        def concatenate(self, other: Foo) -> Foo:
            ...

        @just_an_example_decorator(mapper=Foo)
        def map(self) -> dict:
            ...

Because Foo is not defined at the time of executing the function headers. The proposed feature would allow you to easily define these after the class definition and allow referring to the class directly.

my_menu = Menu(['Pizza', 'Cake', 'Pasta'])
Attaching to the instance is fine too. But I prefer the current spelling so far:

    my_menu1 = Menu(['Pizza', 'Cake', 'Pasta'])
    my_menu2 = Menu(...)

    def callback1(self, ...):
        ...

    def callback2(self, ...):
        ...

    my_menu1.callback = callback2
    my_menu2.callback = callback1

I don't; it is repeating the variable name three times. I don't see how this differs from decorator syntax; do you prefer the old way on that too, or am I missing something? Under the current approach, you can flexibly define callbacks outside of the scope of any particular instance or class, and attach them as needed to instances. Obviously the new syntax would not *remove* this option, but it would cover only a narrow subset of what we can already do... and the way we do it now feels much more self-documenting. I think you answered yourself here: this would not remove the existing flexible way. Just like @decorator syntax didn't remove the more flexible way. Honestly this is in my opinion almost one-to-one comparable with decorator syntax, and I don't think anyone here dares to claim decorators aren't awesome. - Markus

I haven't repeated any name. Notice that '.callback' is different from 'callback1' or 'callback2'. That's exactly the idea—I can attach *arbitrary* callbacks later on to the '.callback' attribute.
I think
But we already *have* decorators! Here's a nice factory for them:

    def attach_to(thing, name=None):
        def decorator(fn):
            # default to the function's own name, without rebinding
            # `name` (which would make it local and raise UnboundLocalError)
            attr_name = name if name is not None else fn.__name__
            setattr(thing, attr_name, fn)
        return decorator

This does everything you are asking for, e.g.:

    my_menu = Menu()

    @attach_to(my_menu)
    def callback(self, ...):
        ...

I got extra fancy to allow you to either use the same name as the function itself or pick a custom name for the attribute.

Oh, I probably want `return fn` inside my inner decorator. Otherwise, the defined name gets bound to None in the global scope. I'm not sure, maybe that's better... but most likely we should leave the name bound for other uses. I just wrote it without testing. On Sun, Feb 12, 2017 at 10:19 AM, David Mertz <mertz@gnosis.cx> wrote:

On 12 February 2017 at 16:51, Markus Meskanen <markusmeskanen@gmail.com> wrote:
Hi Paul, I believe at least two other use cases than monkey patching have been mentioned already:
My point was that people couldn't think of use cases *they* would need the syntax for. Personally, I'd never use the new syntax for the 2 examples you gave. I don't know if your examples are from real-world code, but they feel artificial to me (the callback one less so, but people have been using callbacks for years without missing this syntax). This is just one example of using it to set an instance's variable to a callback. It's just shorthand for:
This reads much easier
That's personal opinion, and there's a lot of disagreement on this point.
and saves us from typing the same thing three times (see decorators).
That's an objective benefit, sure. I don't think it's major in itself, but that's just *my* opinion :-) You could of course use a shorter name for the function, if it matters to you (it doesn't *have* to be the same as the attribute name). Anyway, let's wait for a PEP that addresses all of the points raised in this thread. Paul

On Sun, Feb 12, 2017 at 11:51 AM, Markus Meskanen <markusmeskanen@gmail.com> wrote:
One issue that has been overlooked so far in this thread is that hypothetical use cases are not as important as real-world use cases. One way that PEPs can demonstrate real-world relevance is by demonstrating the effect on some important libraries, e.g. the standard library. For example, asyncio.Future (and concurrent.futures.Future) has a list of callbacks, and the API has add_done_callback() and remove_done_callback() functions for manipulating the callback list. The proposed syntax doesn't cooperate with these callbacks:

    f = asyncio.Future()

    def my_callback(x):
        ...

    f.add_done_callback(my_callback)

How should I write this using the proposed syntax? If the proposal doesn't generalize well enough to cover existing callback patterns in Python's own standard library, then that is a significant weakness. Please keep this in mind as you write the PEP.
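[Editorial note: one possible answer, not from the thread, is that a generic registration decorator composes with add-style callback APIs where the proposed def syntax cannot. The sketch uses concurrent.futures.Future so it runs without an event loop; `register_with` is a hypothetical helper.]

```python
from concurrent.futures import Future

def register_with(register):
    """Hypothetical helper: pass the decorated function to any
    registration callable, then leave the name bound as usual."""
    def decorator(fn):
        register(fn)
        return fn
    return decorator

f = Future()
results = []

@register_with(f.add_done_callback)
def my_callback(fut):
    results.append(fut.result())

f.set_result(42)   # done callbacks run synchronously here
print(results)     # [42]
```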

On 12 February 2017 at 14:51, Markus Meskanen <markusmeskanen@gmail.com> wrote:
You realize now that if we accept this change, and given your example, any "well behaved" Python code with markup will in a couple of months be required to look like:

    class MyClass:
        """Docstring."""

    def MyClass.__init__(self: MyClass, ...) -> None:
        ...

    # add other methods here.

And all it will take is some bureaucratic-minded person to put that as a default option in some highly used linter, like the one that used to be known as "pep8". (And hint: what do you think is the mind orientation of contributors to linter code? :-) ) As a developer constrained by silly rules in automatic linters (like obsessively counting the number of blank lines everywhere) due to project managers' "it's simpler to just stand by the linters' defaults", I feel quite worried about that. So, no, strings for type hinting are much less awful than effectively killing the class body in big projects. Not that this is much more serious than the worries about

    def x["fnord"][5]["gnorts"].method(self, bla):
        ...

which will never be used in sane code anyway. It is the real, present danger of having mandates in whole projects that all methods be defined outside the class body just because of "clean type-hinting". I now am much, much more scared of this proposal than before, and I was already at -1. Please, just let this R.I.P. js -><-

On Sun, Feb 12, 2017 at 05:01:58PM -0200, Joao S. O. Bueno wrote:
This is pure and unadulterated FUD. Nobody is going to use this as the standard way of writing classes. That would be silly: you end up repeating the class name over and over and over again. And to say that this will happen "in a couple [of] months" is totally unrealistic. Although, I suppose that if the entire Python community did drop 2.7-3.6 and move to 3.7 within just one or two months so they could use this syntax, that would certainly vindicate the (hypothetical) decision to add this syntax. But honestly, no. This is not going to happen. VB.NET and C# have something like this, as does Lua, and people still write classes the ordinary way 99.99% of the time. The chances of this becoming the required, or even the recommended, way to write methods is much less than the chances of President Trump introducing Sharia law to the United States.
And all it will take is some bureaucratic minded person to put that as default option in some highly used linter, like the one that used-to-be-known-as-pep8.
Do you *really* think that a linter that used to be called "PEP8" is going to require as a default syntax which (1) doesn't work before Python 3.7 at the earliest, and (2) has no support in PEP-8? It's one thing to question whether this feature is useful enough to be worth adding. It's another to make panicky claims that the Sky Will Fall if it is accepted. -- Steve

On 13 February 2017 at 00:55, Steven D'Aprano <steve@pearwood.info> wrote:
Sorry - but I just pointed out the effect. The person saying they would start writing classes this way is the grandparent poster: On 12 February 2017 at 14:51, Markus Meskanen <markusmeskanen@gmail.com> wrote:
You are correct in your message, and thank you for calming me down - but one thing remains: I was really scared by the grandparent poster - and I would still prefer that this possibility not exist. (And yes, I have code in which I needed to do what is proposed: the extra assignment line did not hurt me at all) js -><-

On Sun, Feb 12, 2017, at 21:55, Steven D'Aprano wrote:
The VB/C# thing you are referring to is, I assume, extension methods. But they're really very different when you look at it. Extension methods are only used when the namespace containing them has been imported, and are based on the static type of the object they are being called on. They also have no access to the object's private members. Python doesn't have static types and doesn't have private members, and using this would make a real modification to the type the method is being added to rather than relying on namespaces being imported, so there would be fewer barriers to "use this for everything" than "use extension methods for everything in C#".

On 2017-02-12 14:01, Joao S. O. Bueno wrote:
I am for the method-outside-of-class form. If it is allowed, I will use it extensively:

* instance methods and extension methods have the same form
* fewer lines between the line you are looking at and the name of the class
* explicit class name helps with searching for methods
* reduces indentation

thanks

Generally speaking, I'm +1 on this idea; I think it would make code more readable, especially for tools like IDEs. I just wanted to ask: can someone point me to the reason Python doesn't support referencing a class inside its own definition? It seems like that would solve some of the cases discussed here, and with type hinting that seems like something that maybe should be considered?

For whatever weight my opinion holds, I'm +0 on this one. In my estimation, in an ideal world it seems like:

    class Foo(object):
        def bar(self):
            """Bar!"""

    # Identical to:

    class Foo(object):
        pass

    def Foo.bar(self):
        """Bar!"""

But I think that's going to be hard to achieve given the implicit binding of `super` (as some have already mentioned) and, more mind-bendingly, when user-defined metaclasses are in play. Indeed, with metaclasses, it seems like it becomes impossible to actually guarantee the equality of the above two blocks of code. Maybe the PEP writers are OK with that, but that should be decided at the outset... Also note that if users start adopting this as their default mode of class creation (rather than just *class extending*), code folding in a lot of IDEs won't handle it gracefully (at least not for quite a while). On Mon, Feb 13, 2017 at 11:32 AM, Joseph Hackman <josephhackman@gmail.com> wrote:
-- Matt Gilson // SOFTWARE ENGINEER

On Mon, Feb 13, 2017 at 11:50:09AM -0800, Matt Gilson wrote:
I think that this is too high a bar to reach (pun not intended). A metaclass can do anything it likes to the methods in the class, and injecting a method after the class already exists is not necessarily the same as including it in the initial namespace argument passed to the metaclass. I think a more reasonable bar is to have

    def Foo.bar(self): ...

equivalent to

    def bar(self): ...
    Foo.bar = bar  # Foo is a class
    del bar

except that the usual class magic like setting __qualname__, super() etc. will work. That feels doable. For instances, the invariant should be slightly different:

    def bar(self): ...
    foo.bar = types.MethodType(bar, foo)  # foo is an instance
    del bar
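The manual spelling being discussed here already works today; a minimal runnable sketch (class and method names are placeholders):

```python
import types

class Foo:
    pass

# Class-level injection: roughly what "def Foo.bar(self): ..." would
# desugar to, minus the __qualname__ and zero-argument super() fixups.
def bar(self):
    return "class-level"

Foo.bar = bar
del bar

# Instance-level injection needs explicit binding: a plain function
# stored on an instance attribute is not run through the descriptor
# protocol, so bind it by hand with types.MethodType.
foo = Foo()

def baz(self):
    return "instance-level"

foo.baz = types.MethodType(baz, foo)
del baz

print(Foo().bar())  # class-level
print(foo.baz())    # instance-level
```

The `del` lines are what the proposed syntax would make unnecessary: today the function name briefly pollutes the enclosing namespace.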
Indeed.
Why would people use this as the default mode of class creation? I mean, sure there's always that *one guy* who insists on their own weird idiosyncratic way of doing things. I know somebody who refuses to use for loops, and writes all his loops using while. But I can't see this becoming a widespread practice. We all have our quirks, but most of our quirks are not that quirky. -- Steve

On 13.02.2017 20:32, Joseph Hackman wrote:
I just wanted to ask: can someone point me to the reason Python doesn't support referencing a class inside it's own definition? It seems like that would solve some of the cases discussed here, and with Type hinting that seems like something that maybe should be considered?
The class doesn't exist yet while Python is running the code in its definition block. You can play some tricks with metaclasses exposing a .__prepare__() method. This will receive the name of the to-be-created class and allows returning a custom namespace in which the body code is run. https://docs.python.org/3.6/reference/datamodel.html#preparing-the-class-nam... The metaclass docs have more details on how all this works: https://docs.python.org/3.6/reference/datamodel.html#metaclasses -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Feb 13 2017)
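The __prepare__ trick described above can be sketched as follows; the metaclass and attribute names here are illustrative, not anything from the standard library:

```python
class SelfNamingMeta(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwargs):
        # __prepare__ runs *before* the class body executes, so it can
        # seed the class namespace with the not-yet-created class's name.
        return {"__own_name__": name}

class Widget(metaclass=SelfNamingMeta):
    # Name lookup in a class body consults the namespace returned by
    # __prepare__, so __own_name__ resolves to the string "Widget".
    greeting = "I will be called " + __own_name__

print(Widget.greeting)  # I will be called Widget
```

This only gives the body the class's *name* as a string; the class object itself still does not exist until the body finishes executing.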

On Mon, Feb 13, 2017 at 02:32:33PM -0500, Joseph Hackman wrote:
The simple answer is: since the class doesn't exist yet, you cannot refer to it. The class name is just a regular name:

    py> MyClass = 'something else'
    py> class MyClass:
    ...     print(MyClass)
    ...
    something else

so the interpreter would need to provide some special-cased magic inside the class body to make it work as you expect. That may be a good idea, but it is a separate issue from this one. -- Steve

On 12 February 2017 at 12:38, Paul Moore <p.f.moore@gmail.com> wrote:
Note that true method injection would *NOT* be the same as binding a callable as a class attribute after the fact:

- attribute assignment doesn't modify __name__
- attribute assignment doesn't modify __qualname__
- attribute assignment doesn't call __set_name__
- attribute assignment doesn't adjust the __class__ cell reference

Any method injection syntax worthy of the name would need to do those things (probably via a new __setdescriptor__ magic method that is a counterpart to PEP 447's __getdescriptor__).
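The first two of those points are easy to observe today; a minimal demonstration (class and function names are my own):

```python
class Widget:
    def spam(self):
        pass

def eggs(self):
    pass

# Plain attribute assignment leaves the function's metadata untouched:
# eggs keeps the module-level __qualname__ it was compiled with, while
# spam, defined lexically in the class body, got the dotted form.
Widget.eggs = eggs

print(Widget.spam.__qualname__)  # Widget.spam
print(Widget.eggs.__qualname__)  # eggs
```

The last two points (the __set_name__ hook and the __class__ cell used by zero-argument super()) cannot be retrofitted from outside at all, which is the core of the argument here.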
I'd argue that method injection (to use your phrase) isn't sufficiently important to warrant promotion to language syntax.
There's a lot to be said for implementing mixin behaviour by way of definition time method injection rather than via MRO traversal when looking up method names (although __init_subclass__ took away one of the arguments in favour of it, since mixins can check their invariants at definition time now).
Method injection is most attractive to me as a potential alternative to mixin classes that has fewer runtime side effects by moving more of the work to class definition time. More philosophically though, it offends my language design sensibilities that we have so much magic bound up in class definitions that we don't expose for procedural access post-definition time - there's a whole lot of behaviours that "just happen" when a method is defined lexically inside a class body that can't readily be emulated for callables that are defined outside it. However, even with that, I'm still only +0 on the idea - if folks really want it, `types.new_class` can already be used creatively to address most of these things, and it's not exactly a problem that comes up very often in practice. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On 12 February 2017 at 22:29, Nick Coghlan <ncoghlan@gmail.com> wrote:
I'll also note that much of what I'm talking about there could be exposed as a types.bind_descriptor() function that implemented the various adjustments (rebinding __class__ references is tricky though, since the function with a bound closure variable might be hidden inside another descriptor, like property) Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On Sun, Feb 12, 2017 at 10:29:10PM +0100, Nick Coghlan wrote: [...]
If the OP is willing to write a PEP, I think it is worth taking a three-part approach:

- expose the class definition magic that Nick refers to;
- which will allow writing a proper inject_method() decorator;
- or allow def Class.method syntax.

I think I would prefer

    def Class.method ...

over

    @inject_method(Class)
    def method ...
    del method

but given how high the barrier to new syntax is, perhaps we should be willing to take the decorator approach and leave the syntax for the future, once people have got used to the idea that extension methods won't cause the fall of civilization as we know it :-)
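A hypothetical inject_method() decorator along these lines can be sketched today; this is not a stdlib API, and it can only fix up the metadata that is reachable from outside the class body:

```python
def inject_method(cls):
    # Hypothetical helper, not part of any library. It fixes up
    # __qualname__, which plain assignment would not, but it cannot
    # make zero-argument super() work: the __class__ cell only exists
    # for functions compiled lexically inside a class body.
    def decorator(func):
        func.__qualname__ = f"{cls.__qualname__}.{func.__name__}"
        setattr(cls, func.__name__, func)
        return func
    return decorator

class Shrubbery:
    pass

@inject_method(Shrubbery)
def describe(self):
    return "a nice shrubbery"

print(Shrubbery().describe())           # a nice shrubbery
print(Shrubbery.describe.__qualname__)  # Shrubbery.describe
```

Note the decorator still returns the function, so unlike the `del method` spelling above, the name also remains bound in the module namespace; the proposed syntax would avoid that as well.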
Swift and Objective-C users might, I think, disagree with that: they even have a term for this, "swizzling". This is part of the "Interceptor" design pattern: https://en.wikipedia.org/wiki/Interceptor_pattern -- Steve

On Fri, Feb 10, 2017 at 06:17:54PM +0200, Markus Meskanen wrote:
Well yes, but I think you're a bit too fast on labeling it a mistake to use monkey patching...
More importantly, I think we're being a bit too quick to label this technique "monkey-patching" at all. Monkey-patching (or MP for brevity) implies making modifications to some arbitrary *uncooperative* class (or instance). When you're plugging electrodes into a monkey's brain, the monkey has no say in it. This proposed syntax can, of course, be used that way, but Python is already a "consenting adults" language and already has setattr:

    setattr(some_class, 'get_shrubbery', get_shrubbery)

which is all you need to enable MP for good or evil. There have been a few times where I would have used this syntax if it had been available, and none of them were MP. They were injecting methods into classes I controlled. I suspect that this technique wouldn't feel so bad if we had a proper, respectable sounding "design pattern" name for it, like "method injection" or something. I expect that the only reason there is no name for this is that Java doesn't allow it. (I think.) So I'm going to call it "method injection". I don't think there's any evidence that slightly cleaner syntax for method injection will encourage MP. We already have clean syntax to inject arbitrary attributes (including methods made with lambda):

    TheClass.method = lambda self: ...

and I don't think there's an epidemic of MP going on. -- Steve

Chris Angelico wrote:
Which is why these proposals always seem to gravitate to "anything you can assign to",
There might be some parsing difficulties with that, e.g. def foo(x)[5](y, z): ... That should be acceptable, because foo(x)[5] is something assignable, but foo(x) looks like the beginning of the definition of a function called foo. I'm not sure whether the parser would cope with that. -- Greg

On Sat, Feb 11, 2017, at 00:33, Greg Ewing wrote:
We could require parentheses to be used anywhere the grammar otherwise couldn't handle it, like yielding a tuple from a generator expression. def (whatever)(args): This does raise the question though of what the function's name/qualname would be. It's cosmetic, but it's also the only real difference between def and an assignment *now*, so it's worth considering. In the case where the last element of the expression is an attribute, the name would simply be the attribute, but would the class portion of the qualname (and the name when it's not an attribute) need to depend on the runtime value of what is being assigned, or would it simply use a string of exactly "foo(x)[5]"?

On Sun, Feb 12, 2017 at 3:38 PM, Steven D'Aprano <steve@pearwood.info> wrote:
So you think the language should prevent silly assignments?
Given that Python is happy to do these kinds of assignments in 'for' statements, I don't see any reason to prevent them in 'def' statements. It's not the language's job to prevent abuse; at best, that's a job for a style guide. ChrisA

On Sun, Feb 12, 2017 at 03:50:03PM +1100, Chris Angelico wrote:
On a case-by-case basis, of course.
I have no idea what that does except by studying the code with great care and essentially running it in my own mental Python interpreter. Should it be prohibited? Not now, that would break backwards compatibility. If it were 1991 or thereabouts again, and Python 0.1 was newly released, and somebody suggested an enhancement to the language that would specifically allow that awfulness, would you be in favour of allowing it? If it were 1991, I'd seriously consider arguing that the loop assignment target should be restricted to a simple name or tuple of names. It's one thing to say "this abomination is allowed because of historical reasons", and another to say "I think your proposal isn't general enough. We should generalise your nice, clean, simple proposal to something nobody in their right mind would ever use!" In fact, if I were more cynical, I'd wonder whether you were trying to sabotage this proposal by over-generalising it to something that has no good use-cases. *multiple smileys* There's a lot of this sort of thing on Python-Ideas: "I think it would be good if Python included a stapler, as a lightweight, quick and easy way to join sheets of paper." "Excellent idea! I think the stapler should include a drill attachment, a sledge hammer and a crowbar, in case you wish to staple the paper to a concrete slab. You could use the drill with a masonry bit to drill into the concrete slab, then you hammer the extra-giant-size staple through the paper and the drill holes. Of course you'll need to use the crowbar to flip the slab upside down so you can hammer the other side of the staple flat. The slab might be thicker than your drill bit, so the stapler also needs X-ray imaging equipment so you can line up the holes you drill from each side and ensure they meet up correctly." *wink*
For many years, preventing this sort of abuse is exactly what the language has done. This proposal is to introduce a *very slight* loosening of the restriction, not tear the whole thing down. There is plenty of good precedent for restricting assignment targets:

    py> errors = [None]
    py> try:
    ...     pass
    ... except Exception as errors[0]:
      File "<stdin>", line 3
        except Exception as errors[0]:
                                  ^
    SyntaxError: invalid syntax

    py> import math as mymodules[-1]
      File "<stdin>", line 1
        import math as mymodules[-1]
                                ^
    SyntaxError: invalid syntax

And similar restrictions on decorators:

    py> @decorators['key']
      File "<stdin>", line 1
        @decorators['key']
                  ^
    SyntaxError: invalid syntax

It's easy to loosen the restriction later if necessary, and all but impossible to tighten it up again if the original decision turns out to be a mistake. My view regarding syntax changes is: what is the *smallest* change to syntax that will satisfy the use-case? Not the broadest or most general. It would be different if you had concrete use-cases for the generalisation to any arbitrary assignment target. But as it stands, it is a clear case of YAGNI, and it complicates the question of what __name__ and __qualname__ should be set to. -- Steve
participants (24)
- Chris Angelico
- Chris Barker
- David Mertz
- Ethan Furman
- Greg Ewing
- Guyzmo
- Joao S. O. Bueno
- Joseph Hackman
- Joshua Morton
- Kyle Lahnakoski
- M.-A. Lemburg
- Mark E. Haase
- Markus Meskanen
- Matt Gilson
- Nick Coghlan
- Nick Timkovich
- Paul Moore
- Random832
- Stephan Houben
- Stephan Hoyer
- Steve Dower
- Steven D'Aprano
- Sven R. Kunze
- Thomas Kluyver