
After working with Kotlin for a while I really started to like the idea of extension methods. It delivers a great way to extend the language without having to add features to the language itself. As a practical example, I would like to take the first item from a list:

```py
my_list = []
first = my_list[0]  # This will obviously throw an error
```

In this case it would be great to create an extension method like:

```py
def list.first_or_none(self):
    if len(self):
        return self[0]
    return None
```

Or ...

```py
def list.first_or_else(self, fallback_value):
    if len(self):
        return self[0]
    return fallback_value
```

Then, we could retrieve a value from the list:

```py
from my_list_extensions import first_or_none, first_or_else

first = my_list.first_or_none()
```

Or ...

```py
first = my_list.first_or_else(0)
```

Some of the main advantages (not an exhaustive list):

- It provides a way to trial extension methods before adding them to the standard library, like in PEP 616.
- Classes can be extended without using inheritance.
- It is easier for IDEs to know what methods are available for a type, and therefore to deliver auto-completion.
- It would also become easier to extend classes from other libraries, not just the standard library.
- It would be much easier to create methods on a class that can be chained, which leads to fewer intermediate variables and (in my opinion) better readability. Sometimes these are called fluent builders or fluent interfaces. `my_list.filter(lambda x: x >= 0).map(lambda x: str(x))` instead of `[str(x) for x in my_list if x >= 0]`. (OK, in hindsight this looks better with the list comprehension, but there are much more complicated examples that would benefit.)

A more comprehensive example would be an HTML builder. Let's say I created a library that builds HTML from Python code, with a nice builder interface:

```py
class Html:
    def __init__(self):
        self.__items = []

    def add_p(self, text):
        self.__items.append(text)
        return self

    def add_h1(self, text):
        self.__items.append(text)
        return self

    def add_div(self, items):
        self.__items.append(items)
        return self
```

To add methods (which is very likely in a class like this), I would have to create a subclass, or get a pull request accepted by the author. In all languages that I know of that have extension methods, the most-given argument for them is better code readability.
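Since each `add_*` method above returns `self`, calls can be chained; a hypothetical usage of the builder:

```py
page = (Html()
        .add_h1("My heading")
        .add_p("Some text")
        .add_div(["nested", "items"]))
```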

What objection do you have to creating a subclass? Adding methods to e.g. `list` could give you problems if you import two modules that each modify `list` and the modifications are incompatible. It could conceivably add bugs to code that uses `list` in a standard way.

Best wishes
Rob Cliffe

On 20/06/2021 13:18, Johan Vergeer wrote:

On 2021-06-20 at 12:18:24 -0000, Johan Vergeer <johanvergeer@gmail.com> wrote:
I disagree with the premise that such a thing is great, although it does have limited use cases (mostly revolving around bugs, whether you know about the bug or are tracking it down). That said, Python already allows it:

```py
>>> class C: pass
...
>>> c = C()
>>> C.f = lambda self, *a: a  # add method f to class C
>>> c.f(5, 6)
(5, 6)
```

See also <https://en.wikipedia.org/wiki/Monkey_patch>.
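Note that the same trick fails on builtin classes, though; in CPython you get something like:

```py
>>> list.f = lambda self, *a: a
Traceback (most recent call last):
  ...
TypeError: can't set attributes of built-in/extension type 'list'
```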

The technique you are calling "extension methods" is known as "monkey-patching" in Python and Ruby. With respect to a fine language, Kotlin doesn't have the user-base of either Python or Ruby. Python does not allow monkey-patching builtin classes, but Ruby does: https://avdi.codes/why-monkeypatching-is-destroying-ruby/ A cautionary tale. So what does Kotlin do to prevent that sort of thing?

Can you use a regular function that takes a list as argument, instead of monkey-patching the list class to add a method? The beauty of a function is that every module is independent, so they can add their own list extensions (as functions) without stomping over each other. Whereas if they add them directly onto the list class itself, they can clash.

As for code readability, I don't think that this:

    mylist.first_or_none()

is particularly more readable than:

    first_or_none(mylist)

Not everything has to be wrapped in a class. https://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html

-- Steve

While IDK about Kotlin's extension methods, I do know that C# (a highly popular programming language) also has extension methods, so I'll talk about those. In C#, extension methods are plain static methods defined in a plain static class; the only key difference between normal static methods and these extension methods is that the first parameter is prefixed with the "this" keyword. You can invoke extension methods as if they were defined on the type they're extending without actually changing the type; the only requirement is that the namespace of the class containing the method is imported (via "using"). In fact, .NET ships with a bunch of extension methods just for the IEnumerable and IEnumerable<T> interfaces, in System.Linq.

Hi William,

Thanks for the description of C# extension methods, but I think that like Britons and Americans, we're in danger of being divided by a common language. (To paraphrase Churchill.)

On Sun, Jun 20, 2021 at 10:56:37PM -0000, William Pickard wrote:
I'm unsure what you mean by "static functions" and whether they are the same thing as "static methods". I believe that a static method is something different in Python and C#.

When you say *prefixed by*, surely you don't actually mean a literal prefix? Using Python syntax:

```py
# I want the first argument to be called "param".
def extension(thisparam):
```

because that would be silly *wink* so I guess that you mean this:

```py
def extension(this, param):
```

except that in Python, we spell it "self" rather than "this", and it is not a keyword. So as far as the interpreter is concerned, whether spelled as "this", "self" or something else, that's just a regular function that takes two parameters, neither of which has any special or pre-defined meaning.
Let me see if I can interpret that, in Python terms. Suppose we have a module X.py which defines a list extension method imaginatively called "extension". In order to use that method from my module, I would have to say:

    using X

first, after which `hasattr(list, 'extension')` would return True. Otherwise, it would continue to return False. So modules have to opt in to use the extension method, rather than having the methods foisted on them as in monkey-patching. Am I close?

I think this sounds much more promising, since it avoids the downsides of monkey-patching. The problem is that in a language like C#, and I presume Kotlin, methods are resolved at compile time, but in Python they are resolved at runtime. So `using` would have to be some sort of registration system, which would require every attribute lookup to go through the registration system looking for only those extension methods which are being used by the current module. I expect that would be slow and complex. But maybe I'm just not clever enough to think of an efficient way of handling it :-(

-- Steve

Your assumption about requiring some form of registration system for Python to implement extensions is correct, as Roslyn (the C# compiler) resolves them at compile time (as long as the namespace is imported, or the class is in the "global" namespace).

When I said static function, it's basically me saying static method; they're the same as a method in Python with "@staticmethod" applied to it.

Here's a sample "prototype" of a System.Linq extension method (a very useful one imo):

```cs
namespace System.Linq
{
    public static class Enumerable
    {
        public static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> enumerable, Func<TSource, TResult> selector);
    }
}
```

As you can see, the param "enumerable" is prefixed by the "this" keyword; this tells Roslyn to treat "enumerable" as if it was the special implicit "this" operator in instance methods:

```cs
using System.Linq;

namespace MyNS
{
    public sealed class MyCLS
    {
        public static readonly List<int> MyInts = new List<int>() { 0, 5, 12, 56, 9 };

        public int RNG = 42;

        public IEnumerable<int> ExpandInts()
        {
            return MyInts.Select(@int => @int * this.RNG);
        }
    }
}
```

As you can see in the above two quoted blocks (SOMEONE TELL ME HOW TO DO CODE BLOCKS!), I'm using the extension method "Select" defined in "Enumerable" as if it was defined on the interface "IEnumerable<T>" when it's actually not. (List<T> implements IList<T>, which inherits ICollection<T>, which inherits IEnumerable<T>.) The only requirement is that both the static class AND the method are visible to your code (with public being visible to everyone).

Here's the official docs: https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and...
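A rough Python analog of that `ExpandInts` body, for comparison (my own sketch, not part of the C# example):

```py
my_ints = [0, 5, 12, 56, 9]
rng = 42

# MyInts.Select(@int => @int * this.RNG) is roughly:
expanded = (i * rng for i in my_ints)  # a lazy sequence, like IEnumerable
```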

On 2021-06-20 7:48 p.m., Steven D'Aprano wrote:
The trick to extension methods is that they're only available when you explicitly use them. In other words, a module/package can't force them on you. The problem with "monkey-patching" is that it *does* get forced on you.

With the way Python works today, your only option for implementing extension methods is to monkey-patch `__getattr__`. An alternative would be to have a module-level `__getattr__`, which by default resolves to the object's own attributes, but which can be replaced by the module to provide proper (module-local) extension-method functionality. Thus, any module-local extension methods would also take priority over subclasses' attributes, without conflicting with them. (It would also slow everything down a lot, but that's beside the point.)

On 2021-06-21 12:26 p.m., Stephen J. Turnbull wrote:
Monkey-patching:

```py
# mod1.py
import foo
foo.Bar.monkeymethod = ...
```

```py
# mod2.py
import foo
foo.Bar.monkeymethod = ...
```

"Extension methods":

```py
# mod1.py
import foo

def __getattr__(o, attr):
    if isinstance(o, foo.Bar) and attr == "monkeymethod":
        return ...
    return getattr(o, attr)
```

```py
# mod2.py
import foo

def __getattr__(o, attr):
    if isinstance(o, foo.Bar) and attr == "monkeymethod":
        return ...
    return getattr(o, attr)
```

Note how the former changes foo.Bar, whereas the latter only changes the module's own `__getattr__`. You can't have conflicts with the latter. (Also note that this "module's own `__getattr__`" doesn't provide extension methods by itself, but can be used as a mechanism to implement extension methods.)

On Tue, Jun 22, 2021 at 1:44 AM Soni L. <fakedme+py@gmail.com> wrote:
So what you're saying is that, in effect, every attribute lookup has to first ask the object itself, and then ask the module? Which module? The one that the code was compiled in? The one that is currently running? Both? And how is this better than just using a plain ordinary function? Not everything has to be a method. ChrisA

On Tue, Jun 22, 2021 at 01:49:56AM +1000, Chris Angelico wrote:
Mu. https://en.wikipedia.org/wiki/Mu_(negative)#%22Unasking%22_the_question

We don't have an implementation yet, so it is too early to worry about precisely where you look up the extension methods, except that it is opt-in (so by default, there's no additional cost involved) and there must be *some* sort of registry *somewhere* that handles the mapping of extension methods to classes. We certainly don't want this to slow down *every method call*, but if it only slowed down method lookups a little bit when you actually used the feature, that might be acceptable.

We already make use of lots of features which are slow as continental drift compared to C, because they add power to the language and are *fast enough*. E.g. name lookups are resolved at runtime, not compile-time; dynamic attribute lookups using the `__getattribute__` and `__getattr__` dunders; virtual subclasses; generic functions (functools.singledispatch); descriptors.
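(As one example of how dynamic the existing machinery already is, `functools.singledispatch` re-resolves on the runtime type of its first argument on every call; a small sketch:)

```py
from functools import singledispatch

@singledispatch
def describe(obj):
    return "something"

@describe.register
def _(obj: list):
    # dispatch is resolved at runtime from type(obj)
    return f"a list of {len(obj)} items"

print(describe([1, 2, 3]))  # "a list of 3 items"
```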
And how is this better than just using a plain ordinary function? Not everything has to be a method.
You get method syntax, `obj.method`, which is nice but not essential. When a method is called from an instance, you know that the first parameter `self` has got to be the correct type, so no type-checking is required. That's good. And the same would apply to extension methods. You get bound methods as first-class values, which is useful. You get inheritance, which is powerful. And you get encapsulation, which is important.

I think this is a Blub moment. We don't think it's useful because we have functions, and we're not Java, so "not everything needs to be a method". Sure, but methods are useful, and they do bring benefits that top-level functions don't have. (And vice versa, of course.) We have staticmethod, which allows us to write a "function" (-ish) but get the benefits of inheritance, encapsulation, and method syntax. This would be similar.

We acknowledge that there are benefits to monkey-patching. But we can't monkey-patch builtins, and we are (rightly) suspicious of those who use monkey-patching in production. And this is good. But this would give us the benefits of monkey-patching without the disadvantages. *If* we can agree on semantics and come up with a reasonably efficient implementation that doesn't slow down every method call.

-- Steve

On Tue, Jun 22, 2021 at 3:43 AM Steven D'Aprano <steve@pearwood.info> wrote:
I'm actually not concerned so much with the performance as the confusion. What exactly does the registration apply to? And suppose you have a series of extension methods that you want to make use of in several modules in your project, how can you refactor a bunch of method registration calls so you can apply them equally in multiple modules? We don't need an implementation yet - but we need clear semantics.
True, all true, but considering that this is *not* actually part of the class, some of that doesn't really apply. For instance, is it really encapsulation? What does that word even mean when you're injecting methods in from the outside?
And that's a very very big "if". Monkey-patching can be used for unittest mocking, but that won't work here. Monkey-patching can be used to fix bugs in someone else's code, but that only works here if *your* code is in a single module, or you reapply the monkey-patch in every module. I'm really not seeing a lot of value in the proposal. Let's completely ignore the performance cost for the moment and just try to figure out semantics, with it being actually useful and not unwieldy. ChrisA

On Tue, Jun 22, 2021 at 03:56:00AM +1000, Chris Angelico wrote:
I'm actually not concerned so much with the performance as the confusion. What exactly does the registration apply to?
Good question. Extension methods have four steps:

- you write a method;
- declare which class it extends;
- the caller declares that they want to use extensions;
- and they get looked up at runtime (because we can't do static lookups).

The first two can go together. I might write a module "spam.py". Borrowing a mix of Kotlin and/or C# syntax, maybe I write:

```py
def list.head(self, arg):
    ...

def list.tail(self, arg):
    ...
```

or maybe we have a decorator:

```py
@extends(list)
def head(self, arg):
    ...
```

The third step happens at the caller site. Using the C# keyword, you might write this in your module "stuff.py":

    uses spam

or maybe there's a way to do it with the import keyword:

```py
# could be confused for `from spam import extensions`?
import extensions from spam

from functools import extension_methods
import spam
extension_methods.load_from(spam)
```

whatever it takes. Depends on how much of this needs to be baked into the interpreter.

Fourth step is that you go ahead and use lists as normal. Whether you use getattr or dot syntax, any extension methods defined in spam.py will show up, as if they were actual list methods:

```py
hasattr([], 'head')  # returns True
list.tail  # returns the spam.tail function object (unbound method)
```

They're not monkey-patched: other modules don't see that.
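A minimal sketch of what the registration side might look like in pure Python, assuming a hypothetical `extends` decorator and a per-module registry (the lookup hook itself is the hard part and isn't shown):

```py
# Hypothetical bookkeeping only: nothing here hooks into attribute
# lookup; it just records who extends what, and from where.
_registry = {}  # maps (defining_module, class) -> {method_name: function}

def extends(cls):
    def decorator(func):
        key = (func.__module__, cls)
        _registry.setdefault(key, {})[func.__name__] = func
        return func
    return decorator

@extends(list)
def head(self):
    # return the first item, or None for an empty list
    return self[0] if self else None
```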
I put the extension methods in one library. That may not literally require me to put their definitions in a single .py file; I should be able to use a package and import extension methods from modules the same as any other object. But for ease of use for the caller, I probably want to make all my related extension methods usable from a single place. Then you, the caller, import/use them from each of your modules where you want to use them:

    # stuff.py
    uses spam

    # things.py
    uses spam

And in modules where you don't want to use them, you just don't use them.

[...]
Sure it's encapsulation. We can already do this with non-builtin classes:

```py
class SpammySpam:
    def spam(self, arg):
        ...
    from another_module import eggy_method

def aardvarks(self, foo, bar):
    ...

SpammySpam.aardvarks = aardvarks
```

The fact that two of those methods have source code that wasn't indented under the class statement is neither here nor there. Even the fact that eggy_method was defined in another module is irrelevant. What matters is that once I've put the class together, all three methods are fully encapsulated into the SpammySpam class, and other classes can define different methods with the same name.

Encapsulation is less about where you write the source code, and more about the fact that I can have SpammySpam().spam and Advertising().spam without the two spam methods stomping on each other.

[...]
LINQ is a pretty major part of the C# ecosystem. I think that proves the value of extension methods :-)

I know we're not really comparing apples with apples; Python's trade-offs are not the same as C#'s trade-offs. But Ruby is a dynamic language like Python, and Ruby programmers use monkey-patching all the time, proving the value of being able to extend classes without subclassing them. Extension methods let us extend classes without the downsides of monkey-patching. Extension methods are completely opt-in, while monkey-patching is mandatory for everyone. If we could only have one, extension methods would clearly be the safer choice.

We don't make heavy use of monkey-patching, not because it isn't a useful technique, but because:

- unlike Ruby, we can't extend builtins without subclassing;
- we're very aware that monkey-patching is a massively powerful technique with huge foot-gun potential;
- and most of all, the Python community is a hell of a lot more conservative than Ruby's.

Even basic techniques intentionally added to the language (like being able to attach attributes onto function objects) are often looked at as if they were the worst kind of obfuscated self-modifying code. Even when those same techniques are used in the stdlib, people are still reluctant to use them. As a community, we're like cats: anything new and different scares us, even if it's actually been used for 30 years. We're a risk-averse community.

-- Steve

On 2021-06-21 9:39 p.m., Steven D'Aprano wrote:
Python is a dynamic language. Maybe you're using hasattr/getattr to forward something from A to B. If "other modules don't see that", then this forwarding must work as if there were no extension methods in place. So you actually wouldn't want the local LOAD_ATTR override to apply to those lookups. If you did... well, just call the override directly. If the override was called `__opcode_load_attr_impl__`, you'd just call `__opcode_load_attr_impl__` directly instead of going through getattr. There needs to be an escape hatch for this.

Or you *could* have getattr be special (called by LOAD_ATTR) and overridable, with builtins.getattr as the escape hatch, but nobody would like that.

I'm sorry Soni, I don't understand what you are arguing here. See below. On Mon, Jun 21, 2021 at 10:09:17PM -0300, Soni L. wrote:
What's "forward something from A to B" mean? What are A and B? If "this" (method lookups) "must work as if there were no extension methods in place" then extension methods are a no-op and are pointless. You write an extension method, register it as applying to a type, the caller opts-in to use it, and then... nothing happens, because it "must work as if there were no extension methods in place". Surely that isn't what you actually want to happen. But if not, I have no idea what you mean. The whole point of extension methods is that once the caller opts in to use them, method look ups (and that includes hasattr and getattr) must work as if the extension methods **are in place**. The must be no semantic difference between: obj.method(arg) and getattr(obj, 'method')(arg) regardless of whether `method` is a regular method or an extension method.
I have no idea what that means. What is "the local load_attr override"?
As a general rule, you should not be calling dunders directly. You seem to have missed the point that extension methods are intended as a mechanism to **extend a type** by giving it new methods on an opt-in basis. I want to call them "virtual methods" except that would add confusion regarding virtual subclasses and ABCs etc.

Maybe you need to read the Kotlin docs:

https://kotlinlang.org/docs/extensions.html

and the C# docs:

https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and...

Wikipedia also has a broad overview from a language-agnostic perspective:

https://en.wikipedia.org/wiki/Extension_method

Here's an example in TypeScript and JavaScript:

https://putridparrot.com/blog/extension-methods-in-typescript/

In particular note these comments:

# Kotlin
"Such functions are available for calling in the usual way as if they were methods of the original class."

# C#
"Extension methods are only in scope when you explicitly import the namespace into your source code with a using directive."

Both C# and Kotlin are statically typed languages, and Python is not, but we ought to aim to minimise the differences in semantics. Aside from extension methods being resolved at runtime instead of at compile time, the behaviour ought to be as close as possible. Just as single dispatch in Python is resolved dynamically, but aims to behave as closely as possible to single dispatch in statically typed languages.

Another important quote: "Because extension methods are called by using instance method syntax, no special knowledge is required to use them from client code. To enable extension methods for a particular type, just add a `using` directive for the namespace in which the methods are defined."

"No special knowledge is required" implies that, aside from the opt-in step itself, extension methods must behave precisely the same as regular methods. That means they will be accessible as bound methods on the instance:

    obj.method

and unbound methods (functions) on the type:

    type(obj).method

and using dynamic lookup:

    getattr(obj, 'method')

and they will fully participate in inheritance hierarchies if you have opted in to use them.
There needs to be an escape hatch for this.
The escape hatch is to *not* opt in to the extension methods. If the caller doesn't opt in, they don't get the extension methods. That is the critical difference between extension methods and monkey-patching the type. Monkey-patching affects everyone. Extension methods have to be opt-in.
Huh? Unless you have shadowed getattr with a module-level function, getattr *is* builtins.getattr. -- Steve

On Tue, Jun 22, 2021 at 8:01 PM Steven D'Aprano <steve@pearwood.info> wrote:
And this is a problem. How is getattr defined? Is it counted as being in the current module? If it is, then has getattr magically become part of the module it's called from? Or do ALL lookups depend on where the function was called, rather than where it's defined? If 'method' is an extension method, where exactly is it visible? ChrisA

On Tue, Jun 22, 2021 at 09:12:53PM +1000, Chris Angelico wrote:
If it's a problem for getattr, it is a problem for dot syntax, because they are essentially the same thing.
How is getattr defined?
The same as it is defined now, except with some minor tweaks to support extension methods.

Do you remember when we introduced `__slots__` (version 2.2 or 2.3, I think?), and added a whole new mechanism to look up ordinary attributes and slot attributes? No, neither do I, because we didn't. We have a single mechanism for looking up attributes, including methods, which works with instances and classes, descriptors and non-descriptors, C-level slots and Python `__slots__` and `__dict__` and `__getattr__` and `__getattribute__`, and I am absolutely positively sure that if Python ever adds a new implementation for attribute lookup, it will still be handled by the same getattr mechanism, which is built into the interpreter.
Is it counted as being in the current module?
`getattr`? No, that's a builtin. You can shadow it or delete it if you want, it's just a public API to the underlying functionality built into the interpreter. Dot syntax won't be affected.
Can you be a bit more precise? I'm not suggesting that we introduce dynamic scoping instead of lexical scoping, if that's what you mean. Attribute lookups already depend on the state of the object at the time the lookup is made. This is just more of the same:

```py
class K: pass

K.attr  # this is an AttributeError
K.attr = 'extension'
K.attr  # this is fine
```
If 'method' is an extension method, where exactly is it visible?
I believe that TypeScript uses "import" for this, so it would be visible from anywhere that imports it: https://putridparrot.com/blog/extension-methods-in-typescript/ -- Steve

On Tue, Jun 22, 2021 at 9:56 PM Steven D'Aprano <steve@pearwood.info> wrote:
Ahh but that is precisely the problem.
Do those tweaks include reaching back into the module that called it? How magical will it be?
Let me clarify then. We shall assume for the moment that the builtins module does not have any extension methods registered. (I suppose it could, but then you get all the action-at-a-distance of monkey-patching AND the problems of extension methods, so I would hope people don't do this.) This means that the getattr() function, being a perfectly straight-forward function, is not going to see any extension methods. Okay then.

```py
# whatever the actual syntax is
@extend(list)
def in_order(self):
    return sorted(self)

stuff = [1, 5, 2]
stuff.in_order()  # == [1, 2, 5]
getattr(stuff, "in_order")()  # AttributeError
```

Does the getattr function see the extension methods? If so, which? If not, how can getattr return the same thing as attribute lookup does? How do you inform getattr of which extension methods it should be looking at?

And what about this?

```py
f = functools.partial(getattr, stuff)
f("in_order")
```

NOW which extension methods should apply? Those registered here? Those registered in the builtins? Those registered in functools?

Yes, monkey-patching *is* cleaner, because the object is the same object no matter how you look it up.

(Oh, and another wrinkle, although a small one: Code objects would need to keep track of their modules. Currently functions do, but code objects don't. But that seems unlikely to introduce further complications.)

ChrisA

On Tue, Jun 22, 2021 at 10:25:33PM +1000, Chris Angelico wrote:
Is it? Don't be shy. Tell us what the problem is and why it's a problem.
I thought you agreed that we didn't need to discuss implementation until we had decided on the desired semantics? Let's just say it will be a well-defined, totally non-magical implementation (like everything else in Python) that manages to be equally efficient as regular attribute access. A high bar to set. Implementation issues may require us to dial that back a bit, or might even rule out the concept altogether, but let's start off by assuming the best and decide on the semantics first. [...]
Let me clarify then.
Thank you, that would be helpful.
How do you know that the builtins aren't already using extension methods?

Let's pretend that you didn't know that CPython's implementation was C rather than C#. Or that C has support for something similar to extension methods. (I daresay you could simulate it, somehow.) Or that we're talking about IronPython, for example, which is implemented in C#. I might tell you that list.sort and list.index are regular methods, and that list.append and list.reverse are extension methods. Short of looking at the source code, there would be absolutely no way for you to tell if I were correct or not. So if builtins used extension methods, that would be indistinguishable from builtins not using extension methods.

(To be pedantic: this would only be true if those extension methods were added at interpreter startup, before the interpreter ran any user code. Otherwise you could take a snapshot of `dir(list)` before and after, and inspect the differences.)

To be clear, this is distinct from *a user module* using extension methods on a builtin type, which is normal and the point of the exercise. Do I need to explain the difference between the interpreter using extension methods as part of the builtins implementation, and user-written modules ("spam.py") extending builtin classes with extension methods? Because they are completely different things.
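(That snapshot test is easy enough to write; here `json` is just a stand-in for whichever import you suspect of patching `list`:)

```py
before = set(dir(list))
import json                      # a stand-in for any suspected import
added = set(dir(list)) - before  # names added after the snapshot
print(sorted(added))             # empty for a well-behaved module
```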
This means that the getattr() function, being a perfectly straight-forward function, is not going to see any extension methods.
Does getattr see slots (both C-level and Python)? Yes. Does it see attributes in instance and class dicts? Yes. Does it see dynamic attributes that use `__getattr__`? Yes. Does it understand the descriptor protocol? Yes. It does everything else dot notation does. Why wouldn't it see extension methods? (Apart from spite.) The getattr builtin is just a public interface to whatever internal function or functions the interpreter uses to look up attributes.
What reason do you have for thinking that would be how it works? Not a rhetorical question: is that how it works in something like Swift, or Kotlin?
Does the getattr function see the extension methods? If so, which?
Yes, and the same ones you would see if you used dot syntax.
If not, how can getattr return the same thing as attribute lookup does?
I think you've just answered your own question. getattr has to return the same thing as attribute lookup, because if it didn't, it wouldn't be returning the same thing as attribute lookup, which is getattr's reason to exist.
How do you inform getattr of which extension methods it should be looking at?
You don't. You inform the interpreter that you are opting in to use extension methods on a type, the interpreter does whatever it needs to do to make it work (implementation), and then it Just Works™.
partial is just a wrapper around its function argument, so that should behave *exactly* the same as `getattr(stuff, 'in_order')`.
Yes, monkey-patching *is* cleaner, because the object is the same object no matter how you look it up.
Oh for heaven's sake, I'm not proposing changes to Python's object identity model! Please don't invent bogus objections that have no basis in the proposal. The id() function and `is` operator will work exactly the same as they do now. Classes with extension methods remain the same object. The only difference is in attribute lookups.
(Oh, and another wrinkle, although a small one: Code objects would need to keep track of their modules.
Would they? I'm not seeing the connection between code objects used by functions and attribute lookups. Perhaps they would, but it's not clear to me what implementation you are thinking of when you make this statement. `getattr` doesn't even have a `__code__` attribute, and neither do partial objects. -- Steve

On Wed, Jun 23, 2021 at 11:40 AM Steven D'Aprano <steve@pearwood.info> wrote:
That's exactly what the rest of the post is about.
Semantics are *exactly* what I'm talking about.
Okay. Lemme give it to you *even more clearly*, since the previous example didn't satisfy.

```py
# file1.py
@extend(list)
def in_order(self):
    return sorted(self)

def frob(stuff):
    return stuff.in_order()
```

```py
# file2.py
from file1 import frob

thing = [1, 5, 2]
frob(thing)  # == [1, 2, 5]

def otherfrob(stuff):
    return stuff.in_order()

otherfrob(thing)  # AttributeError
```

Am I correct so far? The function imported from file1 has the extension method, the code in file2 does not. That's the entire point here, right?

Okay. Now, what if getattr is brought into the mix?

```py
# file3.py
@extend(list)
def in_order(self):
    return sorted(self)

def fetch1(stuff, attr):
    if attr == "in_order":
        return stuff.in_order
    if attr == "unordered":
        return stuff.unordered
    return getattr(stuff, attr)

def fetch2(stuff, attr):
    return getattr(stuff, attr)
```

```py
# file4.py
from file3 import fetch1, fetch2
import random

@extend(list)
def unordered(self):
    return random.shuffle(self[:])

def fetch3(stuff, attr):
    if attr == "in_order":
        return stuff.in_order
    if attr == "unordered":
        return stuff.unordered
    return getattr(stuff, attr)

def fetch4(stuff, attr):
    return getattr(stuff, attr)

thing = [1, 5, 2]
fetch1(thing, "in_order")()
fetch2(thing, "in_order")()
fetch3(thing, "in_order")()
fetch4(thing, "in_order")()
fetch1(thing, "unordered")()
fetch2(thing, "unordered")()
fetch3(thing, "unordered")()
fetch4(thing, "unordered")()
```

Okay. *NOW* which ones raise AttributeError, and which ones give the extension method? What exactly are the semantics of getattr? Is it a magical function that can reach back into the module that called it, or is it actually a function of its own? And if getattr is supposed to reach back into the other module, why shouldn't other functions be able to?

Please explain exactly what the semantics of getattr are, and exactly which modules it is supposed to be able to see. Remember, it is not a compiler construct or an operator. It is a function, and it lives in its own module (the builtins).
Not a rhetorical question: is that how it works in something like Swift, or Kotlin?
I have no idea. I'm just asking how you intend it to work in Python. If you want to cite other languages, go ahead, but I'm not assuming that they already have the solution, because they are different languages. Also not a rhetorical question: Is their getattr equivalent actually an operator or compiler construct, rather than being a regular function? Because if it is, then the entire problem doesn't exist.
So if it behaves exactly the same way that getattr would, then is it exactly the same as fetch2 and fetch4? If not, how is it different? What about other functions implemented in C? If I write a C module that calls PyObject_GetAttr, does it behave as if dot notation were used in the module that called me, or does it use my module's extension methods? You are handwaving a crazy amount of magic into getattr here, magic that basically amounts to "do what I want".
You know what I mean. Stop being obtuse. The object has notably different behaviour depending on where you are when you look at it. In every other way in Python, an object is what it is regardless of who's asking - but now this is proposing changing that.
Attribute lookups are done by bytecode, which lives in code objects. You can execute a code object without an associated function, and you can have functions in different modules associated with the same code object. When you run that bytecode, which set of extension methods would it look up? The sanest approach I can think of is that the code object would remember which module it was created in (which is broadly the same as the way PEP 479 does things - although since that's a binary state, it simply sets one flag on the code object).
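To illustrate the current state: functions record their module, but code objects don't:

```py
>>> def f(): pass
...
>>> f.__module__                       # functions know where they were defined
'__main__'
>>> hasattr(f.__code__, '__module__')  # code objects carry no module info
False
```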
`getattr` doesn't even have a `__code__` attribute, and neither do partial objects.
Builtin functions don't have bytecode, they have C code, but they'd need an equivalent. Partial objects have a func attribute, which would be where you'd go looking for the code (either a code object or C code). None of this changes the fact that code objects still would need to know their modules. ChrisA

On Wed, Jun 23, 2021 at 03:47:05PM +1000, Chris Angelico wrote:
Correct so far.
Okay. Now, what if getattr is brought into the mix?
To a first approximation (ignoring shadowing), every dot lookup can be replaced with getattr and vice versa:

    obj.name <--> getattr(obj, 'name')

A simple source code transformation could handle that, and the behaviour of the code should be the same. Extension methods shouldn't change that.
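For example, today these are interchangeable:

```py
>>> s = "hello"
>>> s.upper() == getattr(s, 'upper')()
True
```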
In file3's scope, there is no list.unordered method, so any call like

    some_list.unordered
    getattr(some_list, 'unordered')

will fail, regardless of which list some_list is, or where it was created. That implies that:

    fetch1(some_list, 'unordered')
    fetch2(some_list, 'unordered')

will also fail. It doesn't matter who is calling the functions, or what module they are called from. What matters is the context where the attribute lookup occurs, which in fetch1 and fetch2 is the file3 scope.
# file4.py from file3 import fetch1, fetch2
Doesn't matter that fetch1 and fetch2 are imported into file4. They are still executed in the global scope of file3. If they called `globals()`, they would see file3's globals, not file4's. Same thing for extension methods.
I think that's going to always return None :-)
In the scope of file4, there is no list method "in_order", but there is a list method "unordered". So

    some_list.in_order
    getattr(some_list, 'in_order')

will fail. That implies that:

    fetch3(some_list, 'in_order')
    fetch4(some_list, 'in_order')

will also fail. It doesn't matter who is calling the functions, or what module they are called from. What matters is the context where the attribute lookup occurs, which in fetch3 and fetch4 is the file4 scope.

(By the way, I think that your example here is about ten times more obfuscated than it need be, because of the use of generic, uninformative names with numbers.)
Look at the execution context. fetch1(thing, "in_order") and fetch2(thing, "in_order") execute in the scope of file3, where lists have an in_order extension method. It doesn't matter that they are called from file4: the body of the fetchN functions, where the attribute access takes place, executes where the global scope is file3, and hence the extension method "in_order" is found and returned. For the same reason, both fetch1(thing, "unordered") and fetch2(thing, "unordered") will fail. It doesn't matter that they are called from file4: their execution context is their global scope, file3, and just as they see file3's globals, not the caller's, they will see file3's extension methods.

(I say "the module's extension methods", not necessarily to imply that the extension methods are somehow attached to the module, but only that there is some sort of registry that says, in effect, "if your execution context is module X, then these extension methods are in use".)

Similarly, the bodies of fetch3 and fetch4 execute in the execution context of file4, where list has been extended with an unordered method. So fetch3(thing, "unordered") and fetch4(thing, "unordered") both return that unordered method. For the same reason (the execution context), fetch3(thing, "in_order") and fetch4(thing, "in_order") both fail.
What exactly are the semantics of getattr?
Oh gods, I don't know the exact semantics of attribute lookups now! Something like this, I think:

```
# obj.attr (same as getattr(obj, 'attr')):
if type(obj).__dict__['attr'] exists and is a data descriptor:
    # data descriptors are the highest priority
    return type(obj).__dict__['attr'].__get__()
elif obj.__dict__ exists and obj.__dict__['attr'] exists:
    # followed by instance attributes in the instance dict
    return obj.__dict__['attr']
elif type(obj) defines __slots__ and there is an 'attr' slot:
    # then instance attributes in slots
    if the slot is filled:
        return contents of slot 'attr'
    else:
        raise AttributeError
elif type(obj).__dict__['attr'] exists:
    if it is a non-data descriptor:
        return type(obj).__dict__['attr'].__get__()
    else:
        return type(obj).__dict__['attr']
elif type(obj) defines a __getattr__ method:
    return type(obj).__getattr__(obj)
else:
    # search the superclass hierarchy
    ...
    # if we get all the way to the end
    raise AttributeError
```

I've left out `__getattribute__`; I *think* that gets called right at the beginning. Also the result of calling `__getattr__` is checked for the descriptor protocol too. And the lookups on classes are slightly different. Also, when looking up on classes, metaclasses may get involved. And super() defines its own `__getattribute__` to customize the lookups. (As other objects may do too.) And some of the fine details may be wrong. But, overall, the "big picture" should be more or less correct:

1. check for data descriptors;
2. check for instance attributes (dict or slot);
3. check for non-data descriptors and class attributes;
4. call __getattr__ if it exists;
5. search the inheritance hierarchy;
6. raise AttributeError if none of the earlier steps matched.

If we follow C# semantics, extension methods would be checked after step 4 and before step 5:

    if the execution context is using extensions for this class
    and 'attr' is an extension method:
        return that method
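(A quick runnable check of the first two steps, showing a data descriptor beating the instance dict:)

```py
class DataDesc:
    def __get__(self, obj, objtype=None):
        return "from the data descriptor"
    def __set__(self, obj, value):
        raise AttributeError("read-only")

class C:
    attr = DataDesc()

c = C()
c.__dict__['attr'] = "from the instance dict"
print(c.attr)  # "from the data descriptor": step 1 wins over step 2
```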
You seem to think that getattr being a function makes a difference. Why? Aside from the possibility that it might be shadowed or deleted from builtins, can you give me any examples where `obj.attr` and `getattr(obj, 'attr')` behave differently? Even *one* example?

Okay, this is Python. You could write a class with a `__getattr__` or `__getattribute__` method that inspected the call chain and did something different if it spotted a function called "getattr". Congratulations, you are very smart and Python is very dynamic. You might even write a `__getattr__` that, oh, I don't know, returned a method if the execution context had opted in to a system that provided extra methods to your class. But I digress.

But apart from custom-made classes that deliberately play silly buggers if they see that getattr is involved, can you give an example of where it behaves differently to dot syntax?
I really don't know why you think getattr being a function makes any difference here. It's a builtin function, written in C, and can and does call the same internal C routines used by dot notation.
Okay, let's look at the partial object:

```py
>>> import functools
>>> f = functools.partial(getattr, [10, 20])
>>> f('index')(20)
1
```

Partial objects like f don't seem to have anything like a `__globals__` attribute that would allow me to tell what the execution context would be. I *think* that for Python functions (def or lambda) they just inherit the execution context from the function. For builtins, I'm not sure. I presume their execution context will be the current scope.

Right now, I've already spent multiple hours on these posts, and I have more important things to do than argue about the minutiae of partial's behaviour. But if you wanted to do an experiment, you could compare the behaviour of:

```py
# module A.py
f = lambda: globals()
g = partial(globals)
```

```py
# module B.py
from A import f, g
f()
g()
```

and see whether f and g behave identically. I expect that f would return A's globals regardless of where it was called from, but I'm not sure what g would do. It might very well return the globals of the calling site.

In any case, with respect to getattr, the principle would be the same: the execution context defines whether the partial object sees the extension methods or not. If the execution context is A, and A has opted in to use extension methods, then it will see extension methods. If the context is B, and B hasn't opted in, then it won't.
That depends. If you write a C module that calls PyObject_GetAttr right now, is that *exactly* the same as dot notation in pure-Python code? The documentation is terse:

https://docs.python.org/3.8/c-api/object.html#c.PyObject_GetAttr

but if it is correct that it is precisely equivalent to dot syntax, then the same rules will apply. Has the current module opted in? If so, then does the class have an extension method of the requested name? The same applies to code objects evaluated without a function, or whatever other exotic corner cases you think of. Whatever you think of, the answer will always be the same:

- if the execution context is a module that has opted in to use extension methods, then attribute access will see extension methods;
- if not, then it won't.

If you think of a scenario where you are executing code where there is no module scope at all, and all global lookups fail, then "no module" cannot opt in to use extension methods, and so the code won't see them. If you can think of a scenario where you are executing code where there are multiple module scopes that fight for supremacy using their two weapons of fear, surprise and a fanatical devotion to the Pope, then the winner will determine the result. *wink*

-- Steve

On 2021-06-23 10:21 a.m., Steven D'Aprano wrote:
But if getattr is part of the builtins module, written in C, and the builtins module is calling PyObject_GetAttr, and PyObject_GetAttr is exactly the same as the dot notation... then getattr is exactly the same as the dot notation **in the builtins module**! The builtins module doesn't use any extension methods, as it is written in C. As such, getattr(foo, "bar") MUST NOT produce the same result as foo.bar if extension methods are at play! (You're still missing the point of extension methods. Do check out our other reply.)

On Wed, Jun 23, 2021 at 11:25 PM Steven D'Aprano <steve@pearwood.info> wrote:
Alright. In that case, getattr() has stopped being a function, and is now a magical construct of the compiler. What happens if I do this?

```py
if random.randrange(2):
    def getattr(obj, attr):
        return lambda: "Hello, world"

def foo(thing):
    return getattr(thing, "in_order")()
```

Does it respect extension methods or not? If getattr is a perfectly ordinary function, as it now is, then it should be perfectly acceptable to shadow it. It should also be perfectly acceptable to use any other way of accessing attributes, for instance the PyObject_GetAttr() function in C. Why should getattr() become magical?
Oops, my bad :) Not that it changes anything, given that we care more about whether they trigger AttributeError than what they actually do. (Chomp the rest of the discussion, since that was all based on the assumption that getattr was a normal function.)
That is exactly what's weird about it. Instead of looking up the name getattr and then calling a perfectly ordinary function, now it has to be a magical construct of the compiler, handled right there. It is, in fact, impossible to craft equivalent semantics in a third-party function. Currently, getattr() can be defined in C on top of the C API function PyObject_GetAttr, which looks solely at the object and not the execution context. By your proposal, getattr() can only be compiler magic.
That is *precisely* the possibility. That is exactly why it is magical by your definition, and nonmagical by the current definition. At the moment, getattr() is just a function, hasattr() is just a function. I can do things like this:

```py
def ga(obj, attr):
    return getattr(obj, attr)
```

Or this:

```py
ga = getattr
```

Or this:

```c
PyObject *my_getattr(PyObject *obj, PyObject *attr)
{
    return PyObject_GetAttr(obj, attr);
}
```

But if it's possible to do a source code transformation from getattr(obj, "attr") to obj.attr, then it is no longer possible to do *ANY* of this. You can't have an alias for getattr, you can't have a wrapper around it, you can't write your own version of it.

In some languages, this is acceptable and unsurprising, because the getattr-like feature is actually an operator. (For instance, JavaScript fundamentally defines obj.attr as being equivalent to obj["attr"], so if you want dynamic lookups, you just use square brackets.) In Python, that is simply not the case. Are you proposing to break backward compatibility and all consistency just for the sake of this?
But if it's just a builtin function, then how is it going to know the execution context it's supposed to look up attributes in? If getattr(obj, "attr") can be implemented by a third party, show me how you would write the function such that it knows which extension methods to look for. You keep coming back to this assumption that it has to be fundamentally equivalent to obj.attr, but that's the exact problem - there is no way to define a function that can know the caller's context (barring shenanigans with sys._getframe), so it has to be compiler magic instead, which means it is *not a function any more*.
Do you see the problem, then? The partial object has to somehow pass along the execution context. Otherwise, functools.partial(getattr, obj)("attr") won't behave identically to obj.attr.
Correct on both counts - f() naturally has to return the globals from where it is, and g() uses the calling site. Whichever way you do it, somewhere, you're going to have a disconnect between obj.attr and the various dynamic ways of looking it up. It is going to happen. So why are you fighting so hard for getattr() to become magical in this way?
The "current module", logically, would be the extension module.
Multiple scopes can definitely be plausible, but there'll just have to be some definition for which one wins. My guess would be that the function object wins, but if not, the code object should know its own context, and if not that, then there is a null context with no extension methods. But that's nothing more than a guess, and there's no rush on pinning that part down precisely. I am wholeheartedly against this proposal if it means that getattr has to become magical. If, however, getattr simply ignores extension methods, and the ONLY thing changed is the way that dot lookup is done, I would be more able to see its value. (Though not so much that I'd actually be using this myself. I don't think it'd benefit any of my current projects. But I can see its potential value for the language.) ChrisA

On Thu, Jun 24, 2021 at 12:17:17AM +1000, Chris Angelico wrote:
On Wed, Jun 23, 2021 at 11:25 PM Steven D'Aprano <steve@pearwood.info> wrote:
How do you come to that conclusion? Did you miss the part where I said "To a first approximation (ignoring shadowing)"? What I'm describing is not some proposed change, it is the status quo. getattr is equivalent to dot notation, as it always has been, all the way back to 1.5 or older. There's no change there. getattr is a function. A regular, plain, ordinary builtin function. You can shadow it, or bind it to another name, or reach into builtins and delete it, just like every other builtin function. But if you don't do any of those things, then it is functionally equivalent to dot lookup.
According to the result of the random number generator, either the lambda will be returned by the shadowed getattr, or the attribute "in_order" will be looked up on obj. Just like today.
If getattr is a perfectly ordinary function, as it now is, then it should be perfectly acceptable to shadow it.
Correct.
Again, correct.
Why should getattr() become magical?
It doesn't.
I don't think that it is impossible to emulate attribute lookup in pure Python code. It's complicated, to be sure, but I'm confident it can be done. Check out the Descriptor How To Guide, which is old but as far as I can tell still pretty accurate in its description of how attributes are looked up.
The only "magic" that is needed is the ability to inspect the call stack to find out the module being called from. CPython provides functions to do that in the inspect library: `inspect.stack` and `inspect.getmodule`. Strictly speaking, they are not portable Python, but any interpreter ought to be able to provide analogous abilities. Do you think that the functions in the gc library are "compiler magic"? It would be next to impossible to emulate them from pure Python in an interpeter-independent fashion. Some interpreters don't even have reference counts. How about locals()? That too has a privileged implementation, capable of doing things likely impossible from pure, implementation-independent Python code. Its still a plain old regular builtin function that can be shadowed, renamed and deleted.
And it will remain the possibility.
That is exactly why it is magical by your definition
It really won't.
In the absence of any shadowing or monkey-patching of builtins. -- Steve

Oh, this is a long one.

Hypothetically, let's say you have a proxy object:

```py
class Foo:
    def __getattribute__(self, thing):
        return getattr(super().__getattribute__("proxied"), thing)
```

Should this really include extension methods into it by default? This is clearly wrong. The local override for the LOAD_ATTR opcode should NOT apply to proxy methods except where explicitly requested. Also, sometimes you are supposed to call the dunder directly, like in the above example. It's not *bad* to do it if you know what you're doing.

The point is that the caller using your proxy object should opt in to the extension methods, rather than break with no way to opt out of them. Your extension methods shouldn't propagate to proxy objects.

To go even further, should all your class definitions that happen to extend a class with in-scope extension methods automatically gain those extension methods? Because with actual extension methods, that doesn't happen. You can have

```py
class MyList(list):
    pass
```

and other callers would not get MyList.flatten, even with you being able to use MyList.flatten locally. Extension methods are more like Rust traits than inheritance-based OOP.

Also note that they use instance method syntax, but no other. That is, they apply to LOAD_ATTR opcodes but should not apply to getattr! (Indeed, reflection in C#/Kotlin doesn't see the extension methods!)

On 2021-06-22 6:57 a.m., Steven D'Aprano wrote:

On Tue, Jun 22, 2021 at 08:44:56AM -0300, Soni L. wrote:
By default? Absolutely not. Extension methods are opt-in.
This is clearly wrong.
What is clearly wrong? Your question? A "yes" answer? A "no" answer? Your proxy object? Soni, and Chris, you seem to be responding as if extension methods are clearly, obviously and self-evidently a stupid idea. Let me remind you that at least ten languages (C#, Java, Typescript, Oxygene, Ruby, Smalltalk, Kotlin, Dart, VB.NET and Swift) support it. Whatever the pros and cons of the technique, it is not self-evidently wrong or stupid.
The local override for the LOAD_ATTR opcode should NOT apply to proxy methods except where explicitly requested.
Why not? Let me ask you this:

- should proxy objects really include `__slots__` by default?
- should they really include dynamic attributes generated by `__getattr__`?
- should they include attributes in the inheritance hierarchy?
- why should extension methods be any different?

Let's step back from extension methods and consider a similar technique, the dreaded monkey-patch. If I extend a class by monkey-patching it with a new method:

```py
import library
library.Klass.method = patch_method
```

would you expect that (by default) the patched method is invisible to proxies of Klass? Would you expect there to be a way to "opt out" of proxying that method? I hope that your answers are "No, and no", because if either answer is "yes", you will be very disappointed in Python.

Why should extension methods be different from any other method? Let's go through the list of methods which are all treated the same:

- methods defined on the class;
- methods defined on a superclass or mixin;
- methods added onto the instance;
- methods created dynamically by `__getattr__`.

(Did I miss any?) And the list of those which are handled differently, with ways to opt out of seeing them:

- ... um... er...

Have I missed any?
The point is that the caller using your proxy object should opt-in to the extension methods, rather than break with no way to opt-out of them.
You opt out by not opting in.
Your extension methods shouldn't propagate to proxy objects.
Fundamentally, your proxy object is just doing attribute lookups on another object. If you have a proxy to an instance `obj`, there should be no difference in behaviour between extension methods and regular methods. If `obj.method` succeeds, so should `proxy.method`, because that's what proxies do. The origin of obj.method should not make any difference. I'm sorry to have to keep harping on this, but it doesn't matter to the proxy whether the method exists in the instance `__dict__`, or the class `__dict__`, or `__slots__`, or a superclass, or is dynamically generated by `__getattr__`. A method is a method. Extension methods are methods.
We might want to follow the state of the art here, assuming there is consensus in other languages about inheriting extension methods. But I would expect this behaviour:

```py
# --- extensions.py library ---
@extends(list)
def flatten(self):
    ...

# --- module A.py ---
uses extensions  # opt in to use the extension method

class MyListA(list):
    pass

MyListA.flatten  # inherits from list

# --- module B.py ---
class MyListB(list):
    pass

MyListB.flatten  # raises AttributeError
```

However, there may be factors I haven't considered.
I understand that Rust doesn't support inheritance at all, and that Rust traits are more like what everyone else calls "interfaces".
Okay, that's a good data point. The question is, why doesn't reflection see the extension methods? That will help us decide whether that's a limitation of reflection in those languages, or a deliberate design feature we should follow. A brief search suggests that people using C# do want to access extension methods via reflection, and that there are ways to do so: https://duckduckgo.com/?q=c%23+invoke+extension+method+via+reflection -- Steve

On Wed, Jun 23, 2021 at 6:25 PM Steven D'Aprano <steve@pearwood.info> wrote:
It's not self-evidently wrong or stupid. But the semantics, as given, don't make sense. Before this could ever become part of the language, it will need some VERY well-defined semantics. (Preferably, semantics that don't depend on definitions involving CPython bytecode, although I'm fine with it being described like that for the time being. But ultimately, other Pythons will have to be able to match the semantics.) You have been saying certain things as if they are self-evidently right, without any justification or explanation.
They are different because they are *context-sensitive*. Every other example you have given is attached to the object itself. The object defines whether it has __slots__, __dict__, a __getattr__ method, a __getattribute__ method, and superclasses. The object defines, in those ways, which attributes can be looked up, and it doesn't matter how you ask the question, you'll get the same answer. Calling getattr(obj, "thing") is the same as obj.thing is the same as PyObject_GetAttr(ptr_to_obj, ptr_to_string_thing) is the same as any other way you would look it up.

Extension methods change that. Now it depends on which module you are in. That means you're either going to have to forfeit these consistencies, or they are going to need to figure out WHICH module you are working with. The simplest definition is this: Extension methods apply *only* to dot notation here in the current module. Every piece of code compiled in this module will look up dotted attributes using extensions active in this module. (In CPython terms, that affects the behaviour of LOAD_ATTR only, and would mean that the code object retains a reference to that module.)

That's pretty reasonable. But to accept this simple definition, you *must* forfeit the parallel with getattr(), since getattr() is defined in the builtins, NOT in your module. Yet you assert that, self-evidently, getattr(obj, "thing") MUST be the same as obj.thing, no matter what.
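To make that invariant concrete, here's a tiny sketch (class and attribute names invented for illustration) of the consistency that holds today:

    class C:
        def __getattr__(self, name):
            # Dynamic attribute: the *object* decides, not the caller's module.
            if name == "thing":
                return 42
            raise AttributeError(name)

    obj = C()
    # Today, every way of asking gives the same answer, in every module:
    assert obj.thing == getattr(obj, "thing") == 42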
For my part, I absolutely agree with you - the proxy should see it. But that's because the *object*, not the module, is making that decision.
Yes, all things that are defined by the object, regardless of its context.
Nope. Python currently has a grand total of zero ways to have attributes whose existence depends on the module of the caller. (Barring shenanigans with __getattr__ and sys._getframe. Or ctypes. I think we can all agree that that sort of thing doesn't count.)
By definition, extension methods are methods in one module, and not methods in another module.
Yes, I'd definitely like to know this too.
The answers appear to be bypassing the extension method and going for the concrete function that underlies it. That seems perfectly reasonable, but it's basically an acknowledgement that extension methods don't show up in these kinds of ways. So... is it really so self-evident that getattr(obj, "thing") HAS to be the same as obj.thing ? ChrisA

On 2021-06-23 5:21 a.m., Steven D'Aprano wrote:
We're saying including local extension methods into the proxy object's attribute lookup is wrong.
That's funny because we (Soni) have been arguing about and pushing for a specific implementation of them. We're not opposed to them; quite the opposite, we even have an implementation we'd like to see, although we don't really see much of a use for them ourselves.
Why shouldn't extension methods be different from monkey-patching? If they were to be the same, why call them something different?
Extension methods are functions and are scoped like other functions.
Yes, and where they do so, they have to explicitly re-opt-in to it. And that's a good thing, because it gives you more flexibility - you're not *forced* to shadow any object methods with your extension methods when using reflection, and this can be useful when you want to e.g. use extension methods for your own benefit and still write a programming language interpreter that integrates with the host language using reflection. The "obvious" thing is that reflection only cares about the object(s), but not the context they're in. If you want to bring in the context, you need to bring it in yourself.

On Tue, Jun 22, 2021 at 10:40 AM Steven D'Aprano <steve@pearwood.info> wrote:
Hmm, that's not what I'd usually understand "encapsulation" to mean. That's what would normally be called "namespacing". "... encapsulation refers to the bundling of data with the methods that operate on that data, or the restricting of direct access to some of an object's components." https://en.wikipedia.org/wiki/Encapsulation_(computer_programming)
I don't think it's safer necessarily. With this proposal, we have the notion that obj.method() can mean two completely different things *at the same time* and *on the same object* depending on how you refactor the code.

    # file1.py
    from file2 import func
    # ... and apply some extension methods

    def spamify(obj):
        print(obj.method())
        print(func(obj))

    # file2.py
    def func(obj):
        return obj.method()

Is that really beneficial? All I'm seeing is myriad ways for things to get confusing - just like in the very worst examples of monkey-patching. And yes, I have some experience of monkey-patching in Python, including a situation where I couldn't just "import A; import B", I had to first import a helper for A, then import B, and finally import A, because there were conflicting monkey-patches. But here's the thing: extension methods (by this pattern) would not have solved it, because the entire *point* of the monkey-patch was to fix an incompatibility. So it HAD to apply to a completely different module. That's why, despite its problems, I still think that monkey-patching is the cleaner option. It prevents objects from becoming context-dependent.
And the Ruby community is starting to see the risks of monkey-patching. (There's a quiz floating around the internet - "Ruby or Rails?" - that brings into sharp relief the incredibly far-reaching effects of using Rails. It includes quite a few methods on builtin objects.) So I am absolutely fine with being conservative. We have import hooks and MacroPy. Does anyone use them in production? I certainly don't - not because I can't, but because I won't without a VERY good reason.
I'm not sure why attaching attributes to functions is frowned upon; I'd personally make very good use of this for static variables, if only I could dependably refer to "this_function". But risk-averse is definitely preferable to the alternative. It means that Python is a language that can be learned as a whole, rather than being fragmented into "the NumPy flavour of Python" and "the Flask flavour of Python" and so on, with their own changes to the fabric of the language. So far, I'm not seeing anything in extension methods to make me want to change that stance. ChrisA

On Tue, Jun 22, 2021 at 05:50:48PM +1000, Chris Angelico wrote:
Hmm, that's not what I'd usually understand "encapsulation" to mean. That's what would normally be called "namespacing".
Pfft, who you going to believe, me or some random folx on the internet editing Wikipedia? *wink*

Okay, using the Wikipedia/OOP definition still applies. Presumably most extension methods are going to be regular instance methods or class methods, rather than staticmethod. So they will take a `self` (or `cls`) parameter, and presumably most such extension methods will actually act on that self parameter in some way. There is your "bundling of data with the methods that operate on that data", as required :-)

The fact that the methods happen to be written in a separate file, and (in some sense) added to the class as extension methods, is neither here nor there. While we *could* write an extension method that totally ignored `self` and instead operated entirely on global variables, most people won't -- and besides, we can already do that with regular methods.

    class Weird:
        def method(self, arg):
            global data, more_data, unbundled_data, extra_data
            del self  # don't need it, don't want it
            do_stuff_with(data, more_data, unbundled_data, extra_data)

So the point is that extension methods are no less object-orientey than regular methods. They ought to behave just like regular methods with respect to encapsulation, namespacing, inheritance etc, modulo any minor and necessary differences. E.g. in C# extension methods can only extend a class, not override an existing method.

[...]
Yes, we can write non-obvious code in any language, using all sorts of "confusing" techniques, especially when you do stuff dynamically.

    class K:
        def __getattr__(self, attrname):
            if attrname == 'method':
                if __name__ == '__main__':
                    raise AttributeError
            return something()

Regarding your example, you're only confused because you haven't taken on board the fact that extension methods aren't interpreter-global, just module-global. Because it's new and unfamiliar. But we can do exactly the same thing, right now, with functions instead of methods, and you will find it trivially easy to diagnose the fault:

    # file1.py
    from file2 import func
    # instead of "applying an extension method" from elsewhere,
    # import a function from elsewhere
    from extra_functions import g

    def spamify(obj):
        print(g(obj))     # works fine
        print(func(obj))  # fails

    # file2.py
    def func(obj):
        return g(obj)  # NameError

This example looks easy and not the least bit scary to you because you've been using Python for a while and it has become second nature to you. But you might remember back when you were a n00b; it probably confused you: why doesn't `g(obj)` work when you imported it? How weird and confusing! What do you mean, if I want to use g, I have to import it in each and every module where I want to use it? That's just dumb. Importing g once should make it available EVERYWHERE, right?

Been there, done that. You learned about modules and namespaces, and why Python's design is *safer and better* than a single interpreter-global namespace, and now that doesn't confuse you one bit. And if you were using Kotlin, or C#, or Swift, or any one of a number of other languages with extension methods, you would likewise learn that extension methods work in a similar fashion. Why does obj.method raise AttributeError from file2? *Obviously* it's because you neglected to "apply the extension method", duh. That's as obvious as neglecting to import something and getting a NameError. Maybe even more obvious, if your IDE or linter knows about extension methods. And it's *safer and better* than monkey-patching.

We have two people in this thread who know Kotlin and C#, at least one of them is a fan of the technique. Why don't we ask them how often this sort of error is a problem within the Kotlin and C# communities?
Sure. Nobody says that extension methods are a Silver Bullet that cures all programming ills. Some things will need a monkey-patch. Python is great because we have a rich toolbox of tools to choose from. To extend a class with more functionality at runtime, we can:

- monkey-patch the class;
- subclass it;
- single or multiple inheritance;
- or a virtual subclass;
- or use it as a mixin or a trait (with third-party library support);
- use delegation and composition;
- or any one of a number of Design Patterns;
- add methods onto the instance to override the methods on the class;
- swizzling (change the instance's class at runtime to change its behaviour);
- just write a function.

Have I missed anything? Probably. None of those techniques is a silver bullet, all of them have pros and cons. Not all of the techniques will work under all circumstances. We should use the simplest thing that will work, for whatever definition of "work" we need for that task. Extension methods are just another tool in the tool box, good for some purposes, not so good for others.
It might be a necessary thing under rather unusual circumstances, but under the great bulk of circumstances, it is a bad thing. Chris, here you are defending monkey-patching, not just as a necessary evil under some circumstances, but as a "cleaner" option, and then in your very next sentence:
And the Ruby community is starting to see the risks of monkey-patching.
Indeed. -- Steve

On Tue, Jun 22, 2021 at 9:23 PM Steven D'Aprano <steve@pearwood.info> wrote:
Okay, that's fair. Granted. It's not ALL of encapsulation, but it is, to an extent, encapsulation. (It is also namespacing, and your justification of it was actually a justification of namespacing; but this is Python, and I think we all agree that namespacing is good!)
Fair point. However, I've worked with a good number of languages that have some notion of object methods, and generally, an object has or doesn't have a method based on what the object *is*, not on who's asking. It's going to make for some extremely confusing results. Is getattr() going to act as part of the builtins module or the module that's calling it? What about hasattr? What about an ABC's instance check, or anything else? How do other languages deal with this? How do they have a getattr-like function? Does it have to be a compiler construct?
Yes. That is exactly right. I am claiming that monkey-patching is, in many MANY cases, a cleaner option than extension methods. And then I am saying that monkey-patching is usually a bad thing. This is not incompatible, and it forms a strong view of my opinion of extension methods. ChrisA

On 2021-06-22 05:14, Chris Angelico wrote:
I agree, and this is the aspect of the proposal that most confuses me. I still can't understand concretely what is being proposed, though, so I'm not sure I even understand it. Can someone clarify? Suppose I have this:

    ### file1.py
    @extend(list)
    def len2(self):
        return len(self)**2

    ### file2.py
    # or whatever I do to say "I want to use extensions to list defined in file1"
    from file1 extend list

    def coolness(some_list):
        return some_list.len2() + 1

    my_list = [1, 2, 3]
    print("My list len2:", my_list.len2())
    print("My list coolness:", coolness(my_list))

    ### file3.py
    import file2

    other_list = [1, 2, 3, 4]
    print("Other list len2:", other_list.len2())
    print("other list coolness:", file2.coolness(other_list))
    print("My list len2 from outside:", file2.my_list.len2())
    print("My list coolness from outside:", file2.coolness(file2.my_list))

What exactly is supposed to happen here if I run file3? file2 declares use of file1's extensions; file3 does not. But file3 uses a function in file2 that makes use of such extensions. Who sees the extension?

The list object my_list in file2 is the same object accessed as file2.my_list in file3. Likewise coolness and file2.coolness. It is going to be super confusing if calling the same function object with the same list object argument gives different results depending on which file you're in. Likewise it's going to be confusing if the same list object sometimes has a .len2 method and sometimes doesn't. But if it doesn't work that way, then it would seem to mean either every module sees the extensions (even if they didn't opt in), or else my_list in file2 is not the same object as file2.my_list in file3. And that would be even worse. (In this example it may seem okay because you can ask why I would call len2 from file3 if I didn't want to use it. But what if the extension is an override of an existing method? Is that not allowed?)

In addition, if there is a difference between my_list and other_list, then that apparently means that the syntax for lists now does something different in the two files. This is maybe the most reasonable approach, since it's at least remotely reminiscent of a __future__ import, which changes syntactic behavior. But what exactly is the difference between the two objects here? Are both objects lists? If they are, then how can they have different methods? If they're not, then what are they?

Most __future__ imports don't work like this. Maybe the closest thing is the generator_stop one, but at least that places a flag on the code object to indicate the difference. Would "extended lists" have some kind of magic attribute indicating which extensions they're using? That may have been marginally acceptable in the case of PEP 479, which was essentially a bugfix, and set the attribute on code objects which are an obscure internal data structure. But allowing this kind of thing for "user-facing" objects like lists would create a profusion of different list objects with different behavior depending on some combination of attributes indicating "what extends me" --- or, even worse, create different behavior without any such overt indication of which extensions are in use for a given object. The idea that the file in which code is written would somehow determine this type of runtime behavior seems to me to break my assumption that by knowing an object's identity I should have all the information I need to know about how to use it.
Some of the posts earlier in this thread seem to suggest that somehow the module where something was defined (something --- not sure what --- maybe the object with the extended method? maybe the extended method itself?) would somehow get a hook to override attribute access on some objects (again, not sure which objects). That to me is the exact opposite of encapsulation. Encapsulation means the object itself contains all its behavior. If there is some getattr-like hook in some other module somewhere that is lying in wait to override attribute access on a given object "only sometimes" then that's not encapsulation at all. It's almost as bad as the infamous COME FROM statement!

Existing mechanisms like __getattribute__ are not parallel at all. When you know an object's identity, you know its MRO, which tells you all you need to know about what __getattribute__ calls might happen. You don't need to know anything about where the object "came from" or what file you're using it in. But it seems with this proposal you would need to know, and that's kind of creepy to me.

-- Brendan Barnwell "Do not follow where the path may lead. Go, instead, where there is no path, and leave a trail." --author unknown

On 2021-06-22 3:43 p.m., Brendan Barnwell wrote:
NameError, value, NameError, value, respectively.
It isn't the list object that has the extension method.
Think about it like this: extension methods give you the ability to make imported functions that look like this:

    foo(bar, baz)

look like this instead:

    bar.foo(baz)

That's all there is to them. They're just a lie to change how you read/write the code. Some languages have a whole operator that has a similar function, where something like bar->foo(baz) is sugar for foo(bar, baz). The OP doesn't specify any particular mechanism for extension methods, so e.g. making the dot operator be implemented by a local function in the module, which delegates to the current attribute lookup mechanism by default, would be perfectly acceptable. It's like deprecating the existing dot operator and introducing a completely different one that has nothing to do with attribute lookup!
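Incidentally, ordinary method calls already desugar in exactly that shape, which you can check today (a tiny example, names made up):

    class Bar:
        def foo(self, baz):
            return baz * 2

    bar = Bar()
    # bar.foo(3) is effectively sugar for Bar.foo(bar, 3):
    assert bar.foo(3) == Bar.foo(bar, 3) == 6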

On 2021-06-22 5:23 p.m., Chris Angelico wrote:
Sure! As long as the new one can call getattr! Let's say the new dot operator looks like this:

    # file1.py
    def __dot__(left, right):
        print(left)
        print(right)
        ...

    foo = []
    foo.bar

Now, this would actually print the list [] and the string "bar". Then you can just use getattr to get attribute lookup behaviour out of it!

    def __dot__(left, right):
        return getattr(left, right)

    foo = []
    foo.bar

It would have local scope, similar to uh... locals. Y'know how locals are just sugar for locals()['foo'] and stuff? Yeah.

On Wed, Jun 23, 2021 at 6:41 AM Soni L. <fakedme+py@gmail.com> wrote:
Not really, no, they're not. :) The dictionary returned by locals() isn't actually part of how local name lookups are implemented. Have you put any thought into how you would deal with the problem of recursive __dot__ calls? ChrisA

On 2021-06-22 5:54 p.m., Chris Angelico wrote:
It's... part of the language. Not an implementation detail. The dictionary returned by locals() is an inherent part of local name lookups, isn't it?
Have you put any thought into how you would deal with the problem of recursive __dot__ calls?
Let it recurse! Globals and locals don't go through __dot__, so you can just... use them. In particular, you can always use getattr(), and probably should. Or even set __dot__ to getattr inside it, like so:

    def __dot__(left, right):
        __dot__ = getattr
        foo.bar  # same as getattr(foo, "bar") because we set (local) __dot__ to getattr above

In languages with lexical scoping (instead of block scoping), the compiler doesn't see things that haven't yet been declared. In those languages, such a __dot__ function would actually inherit the global __dot__ rather than recursing. But as you can see from the above example, it's really not a big deal.

On Wed, Jun 23, 2021 at 8:30 AM Soni L. <fakedme+py@gmail.com> wrote:
No, it's not. Most definitely not. https://docs.python.org/3/library/functions.html#locals
I can't actually pin down what I'm averse to here, but it gives me a really REALLY bad feeling. You're expecting every attribute lookup to now look for a local or global name __dot__ (or, presumably, a nonlocal, class, or builtin), and do whatever that does. That seems like a really effective foot-gun. Have you actually tried designing this into a larger project to see what problems you run into, or is this something you've only considered at this trivial level? ChrisA

On 2021-06-22 7:38 p.m., Chris Angelico wrote:
Ohh. Fair enough, sorry.
1. It's opt-in.

2. It's designed to be used by a hypothetical extension methods module, but without imposing any design constraints on such a module. It could return a named function every time a given name is looked up (a la "bind the first argument" operator), or do dynamic dispatch based on types or ABCs (a la proper extension methods). In practice, you don't def your own __dot__, but rather use someone else's "__dot__ builder". If you don't want to deal with it, just don't use __dot__. It's also useful for the occasional domain-specific language.
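As a rough sketch of what such a "__dot__ builder" might look like, assuming type-based dispatch along the MRO (every name here is invented, not part of any concrete proposal):

    def make_dot(extensions, fallback=getattr):
        # extensions maps {type: {method_name: function}}.
        def __dot__(left, right):
            for cls in type(left).__mro__:
                table = extensions.get(cls)
                if table and right in table:
                    # Bind the first argument, like a method.
                    return table[right].__get__(left)
            return fallback(left, right)  # normal attribute lookup
        return __dot__

    def flatten(self):
        return [x for sub in self for x in sub]

    __dot__ = make_dot({list: {"flatten": flatten}})
    # Under the proposal, `[[1], [2, 3]].flatten()` would desugar to:
    print(__dot__([[1], [2, 3]], "flatten")())  # [1, 2, 3]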

On 2021-06-22 13:09, Soni L. wrote:
Okay, if that's the case, then I just think it's a bad idea. :-)

We already have a definition for what bar.foo does, and it's totally under the control of the bar object (via the __getattr__/__getattribute__ mechanism). The idea that other things would be able to hook in there does not appeal to me at all. I don't really understand why you would want such a thing, to be honest. I feel it would make code way more difficult to reason about, as it would break locality constraints every which way. Now every time you see `bar.foo` you would have to think about all kinds of other modules that may be hooking in and adding their own complications. What's the point? Mostly the whole benefit of the dot notation is that it specifies a locally constrained relationship between the object and the attribute: you know that bar.foo does what bar decides, and no one else gets any say (unless bar asks for their opinion, e.g. by consulting global variables or whatever). If we want to write foo(bar, baz)... well, we can just do that! What you're describing would just make existing attribute usages harder to understand while only "adding" something we can already do quite straightforwardly.

Imagine a similar proposal for other syntax. Suppose that in any module I could define a function called operator_add and then other modules could "import" this "extension" so that every use of the + operator would somehow hook into this operator_add function. So now every time you do 2 + 2 you might be invoking some extension behavior. In my view that is unambiguously a road to madness, and as far as I can tell the extension mechanism you're proposing is equally ill-advised.

-- Brendan Barnwell "Do not follow where the path may lead. Go, instead, where there is no path, and leave a trail." --author unknown

On 2021-06-22 5:34 p.m., Brendan Barnwell wrote:
Imagine if Python didn't have a + operator, but instead a + *infix function*. Thus, every module would automatically include the global

    def infix +(left, right):
        ...

And indeed, you could say we already have this. Except currently you can't define your own local infix +. But what if you *could*? What if you could just,

    # file1.py
    def infix +(left, right):
        return left << right
    x = 4 + 4

    # file2.py
    def infix +(left, right):
        return left ** right
    x = 4 + 4

    # file3.py
    import file1
    import file2
    print(file1.x)  # 64
    print(file2.x)  # 256
    print(4 + 4)    # 8

How does this break locality? Same idea with the dot operator, really. (Some languages don't have operators, but only functions. They let you do just this.)

On 2021-06-22 15:35, Soni L. wrote:
Then that would be bad. All the examples you give seem bad to me. They just make the code more confusing.

Python combines various paradigms, but I think one way in which it very smoothly leverages object orientation is by making objects the locus of so much behavior. Operator overloads are defined at the object level, as are "quasi-operator" overloads for things like attribute lookup, iteration, context managers, etc. This means that the semantics of an expression are, by and large, determined by the types of the objects in that expression. What you're describing is basically moving a lot of that out to the module level. Now instead of operator overloads being governed by objects, they'd be governed by the module in which the code appears. The semantics of an expression would be determined not (only) by the types of the objects involved, but also by the module in which the expression textually occurs.

There aren't many things in Python that work this way. Future imports are the main one, but those are rare (and rightly so). The import machinery itself provides some possibility for this (as used by stuff like macropy) but is mostly not used for such things (and again rightly so).

Beyond that, I just think this kind of thing is a bad idea. Objects naturally cross module boundaries, in that an object may be created in one module and used in many other modules. It is good for an object's behavior to be consistent across modules, so that someone using an object (or reading code that uses an object) can look at the documentation for that object's type and understand how it will work in any context. It is good for code to be understandable in a "bottom up" way in which you understand the parts (the objects, expressions, syntactic structures, etc.) and can combine your understanding of those parts to understand the whole. It is bad for an object to shapeshift and do different things in different contexts. It is bad for code to heavily depend on "top down" information that requires you to know "where you are" (in one module or another) to understand how things work. That increases cognitive burden and makes code more difficult to understand. Personally I'm opposed to anything that moves in that direction.

-- Brendan Barnwell "Do not follow where the path may lead. Go, instead, where there is no path, and leave a trail." --author unknown

On Wed, Jun 23, 2021 at 6:49 PM Brendan Barnwell <brenbarn@brenbarn.net> wrote:
Future imports are generally about syntax. They only very seldom affect execution, and when they do, there has to be a mechanism like a flag on the code object (as generator_stop does). In order to maintain compatibility between modules with and without the directive, the execution has to be independent of that. (For example, Py2's "from __future__ import division" changes the bytecode created from the division operator, and barry_as_FLUFL generates the same code for "<>" that normal mode generates for "!=".)

There's no fundamental problem with having things defined per-module. You can override the print function for the current module, and other modules aren't affected. I think that per-module operator overloading would be extremely confusing, but mainly because type-free operator overloading is an inherently confusing thing to do. Python allows an object to override operators, and C++ allows you to write a function called "operator+" that takes two arguments, but the types of those arguments are what make everything make sense. That usually removes the problem of recursion, because the implementation of operator+(my_obj, my_obj) is usually going to require addition of simpler types like integers.

ChrisA

On Tue, Jun 22, 2021 at 11:43:09AM -0700, Brendan Barnwell wrote:
Right, as far as code executing within the "file2" scope is concerned, lists have a "len2" method.
The call to other_list.len2 would fail with AttributeError, because the current execution scope when the attribute is looked up is file3, not file2. The call to file2.coolness() would succeed, because the function coolness is executed in the scope of file2, which is using the extension method. This is similar to the way name lookups work:

    # A.py
    x = 1
    def func():
        print(x)  # This always looks up x in A's scope.

Obviously calling func() in its own module will succeed. But if you call func() from another module, it doesn't look up "x" in the caller's module, it still refers back to the module it was defined in. Namely A.

    # B.py
    x = 9999
    from A import func
    func()  # prints 1, not 9999

This is standard Python semantics, and you probably don't even think about it. Extension methods would work similarly: whether the attribute name is visible or not depends on which scope the attribute lookup occurs in, not who is calling it. So if you can understand Python scoping, extension methods will be quite similar. Remember:

- global `x` in A.py and global `x` in B.py are different names in different scopes;
- functions remember their original global scope and look up names in that, not the caller's global scope.

Attribute lookups involving extension methods would be similar:

- whether the extension method is seen or not depends on which scope the lookup occurs in, not where the caller is.

That lets us see what will happen in file3.py here:
print("My list len2 from outside:", file2.my_list.len2())
It doesn't matter where the list comes from. What matters is where the attribute access occurs. In the first line there, the attribute access occurs in the scope of file3, so it fails. It doesn't matter whether you write `other_list.len2` or `file2.my_list.len2`, both are equivalent to:

- get a list instance (file2.my_list, but it could be anything);
- doesn't matter where the instance comes from;
- look up the method "len2" on that instance **in the file3 scope**.

Hence it fails. The line that follows:
print("My list coolness from outside:", file2.coolness(file2.my_list))
does this:

- get a list instance (file2.my_list);
- pass it to file2.coolness;
- which looks up the method "len2" **in the file2 scope**;

and hence it succeeds.
"It is going to be super confusing if calling the same function with the same *name* gives different results depending on which file you're in." -- me, 25 years ago or so, when I first started learning Python And probably you to. Attribute lookups are just another form of name lookup. Name lookups depend on the current execution scope, not the caller's scope. With extension methods, so do attribute lookups. If you can cope with name lookups, you can cope with extension methods. [...]
But what if the extension is an override of an existing method? Is that not allowed?)
In C#, you can define extension methods with the same name as an existing method, but they will never be seen. The extension methods are only used if the normal method lookup fails.
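Python's `__getattr__` hook happens to have exactly that priority rule, so the behaviour is easy to demonstrate today (an illustrative sketch of mine, not the proposal itself):

    class C:
        def method(self):
            return "real method"

        def __getattr__(self, name):
            # Only consulted when normal lookup fails, so this can
            # never shadow the real method defined above.
            if name == "method":
                return lambda: "extension"
            raise AttributeError(name)

    c = C()
    print(c.method())  # real method -- normal lookup wins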
In addition, if there is a difference between my_list and other_list,
No, absolutely not. It isn't two different sorts of list. Here is some pseudo-code that might help clarify the behaviour.

    _getattr = getattr  # original that we know and love

    def getattr(obj, name):  # called on obj.name
        # New, improved version
        try:
            return _getattr(obj, name)
        except AttributeError:
            if current execution scope is using extensions for type(obj):
                return extension_method(type(obj), name)

Think of that as a hand-wavy sketch of behaviour, not an exact specification.
Are both objects lists? If they are, then how can they have different methods?
Here's another sketch of behaviour.

    class object:
        def __getattr__(self, name):
            # only called if the normal obj.name fails
            if current execution scope is using extensions for type(self):
                return extension_method(type(self), name)

There's nothing in this proposal that isn't already possible. In fact, I reckon that some of the big frameworks like Django and others probably already do stuff like this behind the scenes. The `extension_method(type, name)` lookup could be nothing more than a dictionary keyed with types:

    {list: {'flatten': <function at 0x123456abcd>, ...},
     int: { ... },
    }

I'm not really sure how to implement this test:

    if current execution scope is using extensions for type(self)

I have some ideas, but I'm not sure how viable or fast they would be.
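For example, here is one crude but runnable approximation, a proof of concept of mine that keys the registry on the calling module's name via sys._getframe; all the names here are invented:

    import sys

    # Hypothetical registry: {module_name: {type: {method_name: function}}}
    _registry = {}

    def use_extension(cls, func):
        # Register func as an extension method on cls, for the calling module only.
        mod = sys._getframe(1).f_globals["__name__"]
        _registry.setdefault(mod, {}).setdefault(cls, {})[func.__name__] = func

    def extension_method(obj, name):
        # Look up name for the *calling* module's scope, walking the MRO.
        mod = sys._getframe(1).f_globals["__name__"]
        for cls in type(obj).__mro__:
            table = _registry.get(mod, {}).get(cls)
            if table and name in table:
                return table[name].__get__(obj)  # bind like a method
        raise AttributeError(name)

    def flatten(self):
        return [x for sub in self for x in sub]

    use_extension(list, flatten)
    print(extension_method([[1], [2, 3]], "flatten")())  # [1, 2, 3]

Whether frame inspection like this would be fast enough is exactly the open question.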
And yet the legions of people using C#, Java, Swift, TypeScript, Kotlin, and others find it invaluable. One of the most popular libraries in computing, LINQ, works through extension methods.

COME FROM was a joke command in a joke language, Intercal. Nobody really used it except as a joke. For you to compare a powerful and much-loved feature used by millions of programmers to COME FROM is as perfect a demonstration of the Blub factor in action. It may be that there are technical reasons why extension methods are not viable in Python, but "it's almost as bad as COME FROM" is just silly. Have a bit of respect for the people who designed extension methods, implemented them in other languages, and use them extensively. They're not all idiots.

And for that matter, neither are Python programmers. Do we really believe that Python programmers are too dim witted to understand the concept of conditional attribute access? "Descriptors, async, multiple inheritance, comprehensions, circular imports, import hooks, context managers, namespaces, threads, multiprocessing, generators, iterators, virtual subclasses, metaclasses, I can understand all of those, but *conditional attribute access* makes my brain explode!!!" I don't believe it for a second.
Existing mechanisms like __getattribute__ are not parallel at all.
I just demonstrated that it would be plausible to implement extension methods via `__getattr__`. Perhaps not efficiently enough to be useful, perhaps not quite with the semantics desired, but it could be done. -- Steve

On 2021-06-23 03:02, Steven D'Aprano wrote:
But that's the thing, they aren't. You gave a bunch of examples of lexical scope semantics with imports and function locals vs globals. But attribute lookups do not work that way. Attribute lookups are defined to work via a FUNCTION CALL to the __getattribute__ (and thence often the __getattr__) of the OBJECT whose attribute is being looked up. They do not in any way depend on the name via which that object is accessed.

Now of course you can say that you want to make a new rule that throws the old rules out the window. We can do that for anything. We can define a new rule that says now when you do attribute lookups it will call a global function called attribute_lookups_on_tuesdays if it's a Tuesday in your timezone. But what I'm saying is that the way attribute lookups currently work is not the same as the way bare-name lookups work, because attribute lookups are localized to the object (not the name!) and bare-name lookups are not. I consider this difference fundamental to Python. It's why locals() isn't really how local name lookups work (which came up elsewhere in this thread). It's why you can't magically hook into "x = my_obj" and create some magical behavior that depends on my_obj. Attribute lookups are under the control of the object; they come after the scope-based name resolution is all over with and they don't use the scope-based rules.

As for other languages, you keep referencing them as if the fact that something known as "extension methods" exists in those other languages makes it self-evident that it would be useful in Python. Python isn't those other languages. I'm not familiar with all of the other languages you mentioned, but I'll bet that at least some of them do not have the same name/attribute lookup rules and dunder-governed object-customization setup as Python. So that's the difference. The fact that extension methods happen to exist and be useful in those languages is really neither here nor there. The attribute lookup procedure you are proposing is deeply inconsistent with the way Python currently does attribute lookup and currently does other things (like operator overloading), and doesn't fit into Python's overall mechanism of object-based hooks. A spatula attachment may be useful on a kitchen mixer; that doesn't mean it's a good idea to add one to your car's dashboard.

Apart from that, I will say that I also don't generally assume that because other languages have a feature it's good or worth considering. Some languages are better designed than others. I think Python is a very well designed language. Certainly we can learn from other languages, but even apart from the issues of "fit" that I describe above, the mere fact that some feature is available or even considered useful in other languages doesn't by itself even convince me it's a good idea at all. It could just be a mistake. We need to specifically show that this will make writing and/or reading code easier and better in Python, and I think this proposal would do the opposite, making code harder to read.

-- Brendan Barnwell "Do not follow where the path may lead. Go, instead, where there is no path, and leave a trail." --author unknown

On Wed, Jun 23, 2021 at 11:22:26AM -0700, Brendan Barnwell wrote:
Of course attribute lookups are another form of name lookup. One hint that this is the case is that some languages, such as Java, call attributes *variables*, just like local and global variables. Both name and attribute lookups are looking up some named variable in some namespace. This shouldn't be controversial.

When you look up a bare name:

    x

the interpreter follows the LEGB rule and looks for `x` in the local scope, the enclosing scope, the global scope and then the builtin scope. There are some complications to do with locals and class scopes, and comprehensions, etc, but fundamentally you are searching namespaces for names. Think of the LEGB rule as analogous to an MRO.

When you look up a dotted name:

    obj.x

the interpreter looks for `x` in the instance scope, the class scope, and any superclass scopes. The detailed search rules are different, what with descriptors, inheritance, etc, the MRO could be indefinitely long, and there are dynamic attributes (`__getattr__`) too. But fundamentally, although the details are different, attribute access and name lookup are both looking up names in namespaces.
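A small side-by-side illustration of the two kinds of search (my own example, nothing proposal-specific):

    x = "global"

    def f():
        x = "local"
        return x  # LEGB: the local scope wins

    class A:
        attr = "from A"

    class B(A):
        pass

    print(f())        # local
    print(B().attr)   # from A -- found by walking the MRO
    print(B.__mro__)  # (<class 'B'>, <class 'A'>, <class 'object'>)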
There's that weird obsession with "function call" again. Why do you think that makes a difference?
They do not in any way depend on the name via which that object is accessed.
Ummmm... yes? Why is that relevant?
Now of course you can say that you want to make a new rule that throws the old rules out the window.
That's pure FUD. Extension methods don't require throwing the old rules out the window. The old rules continue to apply.
If you want to do that then go ahead and propose it in another thread, but I don't want anything like that strawman.
You might have heard of something called "inheritance". And dynamic attributes. Maybe even mixins or traits. Attribute lookups are not localised to the object.
(not the name!) and bare-name lookups are not. I consider this difference fundamental to Python.
There are plenty of differences between name lookups and attribute lookups, but locality is not one of them.
It's why locals() isn't really how local name lookups work (which came up elsewhere in this thread).
I don't see the relevance to extension methods.
I don't see the relevance to extension methods. You seem to be just listing random facts as if they were objections to the extension method proposal. You're right, we can't hook into assignment to a bare name. So what? We *can* hook into attribute lookups.
I think that's a difference that makes no difference. What if I told you that it is likely that Python's name/attribute lookup rules and dunder-governed object-customization are the key features that would make extension methods possible with little or no interpreter support? As I described in a previous post, adding extension methods would be a very small change to the existing attribute lookup rules. The tricky part is to come up with a fast, efficient registration system for determining when to use them.
The fact that extension methods happen to exist and be useful in those languages is really neither here nor there.
Extension methods don't just "happen to exist", they were designed to solve a real problem. That's a problem that can apply to Python just as much as other languages. Your argument here sounds like Not Invented Here. "Sure they're useful, in *other* (lesser) languages, not in Python! We never have any need to extend classes with new methods -- apart from all the times we do, but they don't count."
"Deeply inconsistent" in what way? `__getattr__` can already do **literally anything** on attribute lookups? Anything that you can do in Python, you can do in a `__getattr__` method. Including adding arbitrary new methods.
Fortunately this is not a proposal to add spatulas to car dashboards. It is a proposal to add a mechanism to extend classes with extra functionality -- an extremely common activity in OOP. [...]
I think this proposal would do the opposite, making code harder to read.
Okay. Here's a method call with a regular method:

    obj.method(arg)  # easy to read, understandable

Here's a method call with an extension method:

    obj.method(arg)  # unreadable, incomprehensible garbage

Yes, you're absolutely right, the second is *so much harder to read*.

Seriously, there's a time to realise when arguments against a feature devolve down to utterly spurious claims that Python programmers are idiots who will be confused by:

    from extensions use flatten
    mylist.flatten()

but can instantly understand:

    from extensions import flatten
    flatten(mylist)

If you can understand importing functions, you can understand using extension methods. If anything, extension methods are simpler: there's not likely to be all the complications of:

- circular imports, which are tricky even for seasoned Pythonistas;
- differences between standard import and import from, which often trips beginners up;
- absolute and relative imports;
- regular and namespace packages;

and other quirks of importing. Compared to all of those, extension methods are likely to be simple and straightforward.

-- Steve

On 2021-06-24 20:59:31, Steven D'Aprano wrote:
Does this mean importing a module can modify other objects, including builtins? Should this spooky-action-at-a-distance be encouraged? OTOH, this already happens in the stdlib with rlcompleter, I assume using monkey-patching. This is a special case for interactive use, though. https://docs.python.org/3/library/rlcompleter.html

On 6/24/21 7:09 AM, Simão Afonso wrote:
Yes, importing a module runs the global code in that module, and that code can not only define the various things in that module but can also manipulate the contents of other modules. This doesn't mean that spooky-action-at-a-distance is always good, but sometimes it is what is needed. You need to be aware of the power that you wield. -- Richard Damon

Steven, you're making a pretty good case here, but a couple questions:

1) The case of newer versions of python adding methods to builtins, like .bit_length, is really compelling. But I do wonder how frequently that comes up. It seems to me on this list that people are very reluctant to add methods to anything builtin (other than dunders) -- particularly ABCs, or classes that might be subclassed, as that might break anything that currently uses that same name. Anyway, is this a one-off? or something that is likely to come up semi-frequently in the future? Note also that the py2 to py3 transition was (hopefully) an anomaly -- more subtle changes between versions make it less compelling to support old versions for very long.

2) Someone asked a question about the term "Extension Methods" -- I assume it's "Extension attributes", where the new attribute could be anything, yes?

3) Comprehensibility:

Seriously, there's a time to realise when arguments against a feature
No -- we're not assuming Python users are idiots -- there is an important difference here:

    from extensions import flatten
    flatten(mylist)

very clearly adds the name `flatten` to the current module namespace. That itself can be confusing to total newbies, but yes, you can't get anywhere with Python without knowing that. Granted, you still need to know what `flatten` is, and what it does, some other way in any case. Whereas:

    from extensions use flatten
    mylist.flatten()

does NOT import the name `flatten` into the local namespace -- which I suppose will be "clear" because it's using "use" rather than a regular import, but that might be subtle. But importantly, what it has done is add a name to some particular type -- what type? who knows? In this example, you used the name "mylist", so I can assume it's an extension to list. But if that variable were called "stuff", I'd have absolutely no idea. And as above, you'd need to go find the documentation for the flatten extension method, just as you would for any name in a module, but somehow functions feel more obvious to me.

Thinking about this I've found what I think is a key issue for why this may be far less useful for Python than it is for other languages. Using the "flatten" example, which I imagine you did due to the recent discussion on this list about such a function as a potential new builtin: Python is dynamically polymorphic (I may have just made that term up -- I guess it's the same as duck typed) -- but what that means in that context is that I don't usually care exactly what type an object is -- only that it supports particular functionality, so, for instance:

    from my_utilities import flatten

    def func_that_works_with_nested_objects(the_things):
        all_the_things_in_one = flatten(the_things)
        ...

Presumably, that could work with any iterable with iterables in it. But:

    from my_extensions use flatten

    def func_that_works_with_nested_objects(the_things):
        all_the_things_in_one = the_things.flatten()

OOPS! that's only going to work with actual lists. Are you thinking that you could extend an ABC? Or if not that, then at least a superclass and get all subclasses? I'm a bit confused about how the MRO might work there. Anyway, in my mind THAT is the big difference between Python and at least many of the languages that support extension methods.

A "solution" would be to do what we do with numpy -- it has an "asarray()" function that is a no-op if the argument is a numpy array, and creates an array if it's not. We often put that at the top of a function, so that we can then use all the nifty array stuff inside the function, but not require the caller to create an array first. But that buys ALL the numpy functionality; it would be serious overkill for a method or two. It's not a reason it couldn't work, or be useful, but certainly a lot less useful than it might be. In fact, the example for int.bit_length may be the only compelling use case -- not that method per se, but a built-in type that is rarely duck-typed. That would be integers, floats and strings, at least those are the most common. Even ints and floats are two types that are frequently used interchangeably.

Side note: I don't see the relevance to extension methods. You seem to be just
listing random facts as if they were objections to the extension method proposal.
Let's keep this civil, and assume good intentions -- if something is irrelevant, it's irrelevant, but please don't assume that the argument was not made in good faith. For my part I've been following this thread, but only recently understood the scope of the proposal well enough to know that e.g. the above issue was not relevant. -Chris B -- Christopher Barker, PhD (Chris) Python Language Consulting - Teaching - Scientific Software Development - Desktop GUI and Web Development - wxPython, numpy, scipy, Cython

On 2021-06-24 09:19:36, Christopher Barker wrote:
This explicit namespacing is an important objection, IMHO. What about this syntax:
from extensions use flatten in list
And symmetrically:
from extensions use * in list
Going even further (maybe too far):
from extensions use * in *

On Thu, Jun 24, 2021 at 09:19:36AM -0700, Christopher Barker wrote:
Of course there is no way of knowing how conservative future Steering Councils will be. As Python gets older, all the obviously useful methods will already exist, and the rate of new methods being added will likely decrease. But if you look at the docs for builtins: https://docs.python.org/3/library/stdtypes.html and search for "New in version" you will get an idea of how often builtins gain new methods since Python 3. Similarly for other modules.
Large frameworks and libraries will surely continue to support a large range of versions, even as they drop support for 2.7.
2) Someone asked a question about the term "Extension Methods" -- I assume it's "Extension attributes", where the new attribute could be anything, yes?
In principle, sure. That's a design question to be agreed upon. [...]
Indeed. There's nothing "very clearly" about that until you have learned how imports work and why `from... import...` is different from `import`.
Correct -- it "imports" it into the relevant class namespace, or whatever language people will use to describe it.
That's a fair observation. Do you think that C# users of LINQ are confused by what is being modified when they say this?

    using System.Linq;

Of course extension methods will be *new and different* until they become old and familiar. It may be that the syntax will make it clear not just where you are "importing" extensions from but what types will be included. That is an excellent point to raise, thank you.
Yes, and we can write obfuscated variable names in Python today too :-)
You're not wrong. I dare say that there will be a learning curve involved with extension methods, like any new technology. If you've never learned about them, it might be confusing to see:

    mylist.flatten()

in code and then try `help(list.flatten)` in the interactive interpreter and get an AttributeError exception, because you didn't notice the "using" at the top of the module. But how is that different from seeing:

    now = time()

in code and then having `help(time)` raise a NameError because you didn't notice the import at the top of the module? There's a learning curve in learning to use any tool, and that includes learning to program.

[...]
If ABCs use normal attribute lookup, there's no reason why extension methods shouldn't work with them. -- Steve

On 2021-06-21 12:49 p.m., Chris Angelico wrote:
Quite the opposite. You ask the local module (the one that the code was compiled in), and the module decides whether/when to ask the object itself. In other words, every foo.bar would be sugar for __getattr__(foo, "bar") (where __getattr__ defaults to builtins.getattr) instead of being sugar for <builtins.getattr>(foo, "bar") (where <> is used to indicate that it doesn't quite desugar that way - otherwise you'd need to recursively desugar it to builtins.getattr(builtins, "getattr") which uh, doesn't work.)

On Tue, Jun 22, 2021 at 3:55 AM Soni L. <fakedme+py@gmail.com> wrote:
Thanks for clarifying. This doesn't change the problem though - it just changes where the issue shows up. (BTW, what you're describing is closer to __getattribute__ than it is to __getattr__, so if you're proposing this as the semantics, I strongly recommend going with that name.) So, here's the question - a clarification of what I asked vaguely up above. Suppose you have a bunch of these extension methods, and a large project. How are you going to register the right extension methods in the right modules within your project? You're binding the functionality to the module in which the code was compiled, which will make exec/eval basically unable to use them, and that means you'll need some way to set them in each module, or to import the setting from somewhere else. How do you propose doing this? ChrisA

On 2021-06-21 3:01 p.m., Chris Angelico wrote:
Oh, sorry, thought __getattribute__ was the fallback and __getattr__ the one always called, what with getattr -> __getattr__. But yeah, __getattribute__ then.
For exec/eval you just pass in the locals:

    exec(foo, globals(), locals())

because this __getattribute__ is just a local like any other. As for each module, you'd import them. But not quite with "import":

    import extension_methods
    # magic module, probably provides an @extend(class_), e.g. @extend(list)

    import shallow_flatten
    import deep_flatten

    __getattribute__ = extension_methods.getattribute(
        shallow_flatten.flatten,  # uses __name__
        deepflatten=deep_flatten.flatten,  # name override
        __getattribute__=__getattribute__,  # optional, defaults to builtins.getattr
    )

This would have to be done for each .py that wants to use the extension methods.
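For what it's worth, here is a guess at what that hypothetical extension_methods.getattribute factory could look like; the factory, its parameters, and the extensions-first priority are all my assumptions from the example above, not a specification:

    import builtins

    def getattribute(*funcs, __getattribute__=builtins.getattr, **named):
        # Build a module-local lookup function. Extension names are
        # checked first here (checking them last, so real attributes
        # win, would be an equally plausible design choice), then we
        # fall back to the previous lookup, ultimately builtins.getattr.
        table = {f.__name__: f for f in funcs}
        table.update(named)

        def lookup(obj, name):
            if name in table:
                return table[name].__get__(obj)  # bind obj as self
            return __getattribute__(obj, name)

        return lookup

With something like this, `foo.bar` compiled in the module would desugar to `__getattribute__(foo, "bar")`.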

On Mon, Jun 21, 2021 at 3:28 PM Soni L. <fakedme+py@gmail.com> wrote:
I bet you that you could already do this today with a custom import hook. If you want to "easily" experiment with this, I would suggest having a look at https://aroberge.github.io/ideas/docs/html/index.html which likely has all the basic scaffolding that you would need. André Roberge

On Mon, Jun 21, 2021 at 02:54:52PM -0300, Soni L. wrote:
All you've done here is push the problem further along -- how does `__getattr__` (`__getattribute__`?) decide what to do?

* Why is this extension-aware version per module, instead of a builtin?
* Does that mean the caller has to write it in every module they want to make use of extensions?
* Why do we need a second attribute lookup mechanism instead of having the existing mechanism do the work?
* And most problematic, if we have an extension method on a type, the builtin getattr ought to pick it up.

By the way, per-module `__getattr__` already has a meaning, so this name won't fly. https://www.python.org/dev/peps/pep-0562/

-- Steve

On 2021-06-21 8:42 p.m., Steven D'Aprano wrote:
No, you got it wrong. Extension methods don't go *on* the type being extended. Indeed, that's how they differ from monkeypatching. The whole point of extension methods *is* to be per-module. You could shove it in the existing attribute lookup mechanism (aka the builtins.getattr) but that involves runtime reflection, whereas making a new, per-module attribute lookup mechanism specifically designed to support a per-module feature would be a lot better. Extension methods *do not go on the type*. And sure, let's call it __opcode_load_attr_impl__ instead. Sounds good?

On 2021-06-21 8:57 p.m., Thomas Grainger wrote:
It seems odd that it would be per module and not per scope?
It's unusual to import things at the scope level. Usually things get imported at the module level, so using module language doesn't seem that bad. But yes, it's per scope, but in practice it's per module because nobody would actually use this per scope even though they could. :p

I've just thought of a great use-case for extension methods. Hands up, who has to write code that runs under multiple versions of Python? *raises my hand* I'm sure I'm not the only one. You probably have written compatibility functions like this:

    def bit_length(num):
        try:
            return num.bit_length()
        except AttributeError:
            # fallback implementation goes here
            ...

and then everywhere you want to write `n.bit_length()`, you write `bit_length(n)` instead. Extension methods would let us do this:

    # compatibility.py
    @extends(int)
    def bit_length(self):
        # fallback implementation goes here
        ...

    # mylibrary.py
    using compatibility

    num = 42
    num.bit_length()

Now obviously that isn't going to help with versions too old to support extension methods, but eventually extension methods will be available in the oldest version of Python you care about:

    # supports Python 3.14 and above

Once we reach that point, then backporting new methods to classes becomes a simple matter of using an extension method. No mess, no fuss.

As someone who has written a lot of code like that first bit_length compatibility function in my time, I think I've just gone from "Yeah, extension methods seem useful..." to "OMG I WANT THEM TEN YEARS AGO SO I CAN USE THEM RIGHT NOW!!!". Backporting might not be your killer-app for extension methods, but I really do think they might be mine.
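(For completeness, one plausible pure-Python body for that "fallback implementation goes here" comment; my own sketch, not something from the thread:)

    def bit_length(num):
        try:
            return num.bit_length()
        except AttributeError:
            # Count how many bits abs(num) needs, matching int.bit_length().
            num = abs(num)
            length = 0
            while num:
                num >>= 1
                length += 1
            return length

    assert bit_length(42) == (42).bit_length() == 6
    assert bit_length(0) == 0

-- Steve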

On Wed, Jun 23 2021 at 20:48:39 +1000, Steven D'Aprano <steve@pearwood.info> wrote:
Of course that means the standard library might also introduce something new that will be shadowed by one of your custom methods, and then you'll wish you had just used functions or a wrapper class. If you can import extension methods wholesale, you might even be monkeypatching something without realising it, in which case you'll be lucky if things break in an obvious way. Honestly, all the use cases in this thread seem to be much better served by using plain old functions.

On Wed, Jun 23, 2021 at 07:23:19PM +0200, João Santos wrote:
Of course that means the standard library might also introduce something new that will be shadowed by one of your custom methods,
Extension methods have lower priority than actual methods on the class. So that won't happen. The actual method on the class will shadow the extension method.

I'm not sure if you completely understand the use-case I was describing, so let me clarify for you with a concrete example. Ints have a "bit_length" method, starting from Python 2.7. I needed to use that method going all the way back to version 2.4. I have an implementation that works, so I could backport that method to 2.4 through 2.6, except that you can't monkey-patch builtins in Python. So monkey-patching is out.

(And besides, I wouldn't want to monkey-patch it: I only need that method in one module. I want to localise the change to only where it is needed.)

Subclassing int wouldn't help. I need it to work on actual ints, and any third-party subclasses of int, not just my own custom subclass.

(And besides, have you tried to subclass int? It's a real PITA. It's easy enough to write a subclass, but every operation on it returns an actual int instead of the subclass. So you have to write a ton of boilerplate to make int subclasses workable. But I digress.)

So a subclass is not a good solution either. That leaves only a function. But that hurts code readability and maintenance. In 2.7 and above, bit_length is a method, not a function. All the documentation for bit_length assumes it is a method. Every tutorial that uses it has it as a method. Other code that uses it treats it as a method. Except my code, where it is a function.

Using a function is not a *terrible* solution to the problem of backporting a new feature to older versions of Python. I've done it dozens of times and it's not awful. **But it could be better.** Why can't the backport be a method, just like in 2.7 and above? With extension methods, it can be.

Obviously not for Python 2.x code. But plan for the future: if we have extension methods in the language, eventually every version of Python we care about will support it. And then writing compatibility layers will be much simpler.
and then you'll wish you had just used functions or a wrapper class.
Believe me, I won't. I've written dozens of compatibility functions over the last decade or more, going back to Python 2.3. I've written hybrid 2/3 code. Extension methods would not always be useful, but for cases like int.bit_length, it would be a far superior solution.
If you can import extension methods wholesale, you might even be monkeypatching something without realising it
Extension methods are not monkey-patching. They are like adding a global name to one module. If I write:

```py
def func(arg):
    ...
```

in module A.py, that does not introduce func to any other module unless those other modules explicitly import it. Extension methods are exactly analogous: they are only visible in the module where you opt in to use them. They don't monkey-patch the entire interpreter-wide environment.

And because extension methods have a lower priority than actual methods, you cannot override an existing method on a class. You can only extend the class with a new method. -- Steve
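To make the subclassing pain point concrete, here is the behaviour being described — arithmetic on an int subclass silently decays back to plain int:

```py
# Every operator on an int subclass returns a plain int, so the
# subclass "leaks away" unless you override each dunder method.
class MyInt(int):
    pass

x = MyInt(2) + MyInt(3)
print(type(x))  # <class 'int'> -- not MyInt
```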

On Thu, Jun 24, 2021 at 7:51 PM Steven D'Aprano <steve@pearwood.info> wrote:
You've given some great arguments for why (5).bit_length() should be allowed to be a thing. (By the way - we keep saying "extension METHODS", but should this be allowed to give non-function attributes too?) But not once have you said where getattr(), hasattr(), etc come into this. The biggest pushback against this proposal has been the assumption that getattr(5, "bit_length")() would have to be the same as (5).bit_length(). Why is that necessary? I've never seen any examples of use-cases for that.

Let's tighten this up into a real proposal. (I'm only +0.5 on this, but am willing to be swayed.)

* Each module has a registration of (type, name, function) triples.
* Each code object is associated with a module.
* Compiled code automatically links the module with the code object. (If you instantiate a code object manually, it's on you to pick a module appropriately.)
* Attribute lookups use three values: object, attribute name, and module.
* If the object does not have the attribute, its MRO is scanned sequentially for a registered method. If one is found, use it.

Not mentioned in this proposal: anything relating to getattr or hasattr, which will continue to look only at real methods. There may need to be an enhanced version of PyObject_GetAttr which is able to look up extension methods, but the current one simply wouldn't.

Also not mentioned: ABC registration. If you register a class as a subclass of an ABC and then register an extension method on that class, isinstance() will say that it's an instance of the ABC, but the extension method won't be there. I'm inclined to say "tough luck, don't do that", but if there are strong enough use cases, that could be added.

But otherwise, I would FAR prefer a much simpler proposal, one which changes only the things that need to be changed. ChrisA
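A purely illustrative sketch of that registry in plain Python — the helper names and the explicit module argument are invented here; in the actual proposal the interpreter would supply the module and perform the lookup only after normal attribute lookup fails:

```py
# Hypothetical sketch of a per-module registry of (type, name, function)
# triples, scanned along the MRO as the proposal describes.
_registry = {}  # module name -> {(type, attribute name): function}

def register(module, typ, name, func):
    _registry.setdefault(module, {})[(typ, name)] = func

def lookup_extension(module, obj, name):
    table = _registry.get(module, {})
    for klass in type(obj).__mro__:  # scan the MRO sequentially
        func = table.get((klass, name))
        if func is not None:
            return func.__get__(obj, type(obj))  # bind like a real method
    raise AttributeError(name)

# Register an extension for lists, visible only to module "mymod".
register("mymod", list, "in_order", lambda self: sorted(self))
print(lookup_extension("mymod", [3, 1, 2], "in_order")())  # [1, 2, 3]
```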

Here's a quick and dirty proof of concept I knocked up in about 20 minutes, demonstrating that no deep compiler magic is needed. It's just a small change to the way `object.__getattribute__` works. I've emulated it with my own base class, since `object` can't be monkey-patched.

The proof of concept is probably buggy and incomplete. It isn't intended to be a final, polished production-ready implementation. It's not implementation-agnostic: it requires the ability to inspect the call stack. If you're using IronPython, this may not work.

You will notice I didn't need to touch getattr to have it work, let alone hack the interpreter to make it some sort of magical construct. It all works through `__getattribute__`. The registration system is just the easiest thing that I could throw together. There are surely better designs.

Run A.py to see it in action. -- Steve
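The attachment isn't reproduced here, but a reconstruction along the lines described — a base class plus call-stack inspection, with invented names — might look roughly like this:

```py
# Rough reconstruction, not the actual A.py: extension methods are
# registered per calling module, and __getattribute__ falls back to
# them using the caller's module name found via the call stack.
import sys

_extensions = {}  # (caller module, class, attribute name) -> function

def extends(typ):
    def decorator(func):
        caller = sys._getframe(1).f_globals["__name__"]
        _extensions[(caller, typ, func.__name__)] = func
        return func
    return decorator

class Extensible:
    def __getattribute__(self, name):
        try:
            return object.__getattribute__(self, name)
        except AttributeError:
            # Frame 1 is whoever performed the attribute access; the
            # exact index depends on how the lookup is layered.
            caller = sys._getframe(1).f_globals["__name__"]
            for klass in type(self).__mro__:
                func = _extensions.get((caller, klass, name))
                if func is not None:
                    return func.__get__(self, type(self))
            raise
```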

On Fri, Jun 25, 2021 at 3:31 AM Steven D'Aprano <steve@pearwood.info> wrote:
Okay, so you've hidden the magic away a bit, but you have to choose the number [2] for your stack inspection. That means you have to be sure that that's the correct module, in some way. If you do *anything* to disrupt the exact depth of the call stack, that breaks.

```py
_hasattr = hasattr
def hasattr(obj, attr):
    return _hasattr(obj, attr)
```

Or any of the other higher level constructs. What if there's a C-level function in there? This is still magic. It's just that the magic has been buried slightly. ChrisA

I've read all the posts in this thread, and am overall at least -0.5 on the idea. I like methods well enough, but mostly it just seems to invite confusion versus the equivalent and existing option of importing functions.

I am happy, initially, to stipulate that "some clever technique" is available to make accessing an extension method/attribute efficient. My objection isn't that. Rather, my concern is "spooky action at a distance." It becomes difficult to know whether my object 'foo' will have the '.do_blaz()' method or not. Not difficult like no determinate rule could exist, but difficult in the sense that I'm looking at this one line of code in a thousand-line module. The type() of 'foo' is no longer enough information to know the answer.

That said, one crucial difference is that once an extension method is "used" we are stuck with it for the entire module. In contrast, functions can be both namespaced and redefined. So I can do:

```py
import alpha, beta

if alpha.do_blaz() == beta.do_blaz():
    ...
```

I can also do this:

```py
from alpha import do_blaz

def myfun():
    from beta import do_blaz
    ...
```

We get scoping and namespaces that extension methods lack. So perhaps we could add that?

```py
from extensions import do_blaz

with extend(list, do_blaz):
    assert isinstance(foo, list)
    foo.do_blaz()
```

This would avoid the drawbacks I perceive. On the other hand, it feels like a fair amount of complexity for negligible actual gain.
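The `with extend(...)` idea can be approximated today for user-defined classes with a context manager, with the caveat that the patch is interpreter-wide while the block is active rather than truly scoped:

```py
# Scoped-ish extension via temporary monkey-patching. Builtins such as
# list reject setattr, so a user-defined stand-in class is used here.
from contextlib import contextmanager

@contextmanager
def extend(cls, func):
    name = func.__name__
    sentinel = object()
    old = cls.__dict__.get(name, sentinel)
    setattr(cls, name, func)
    try:
        yield
    finally:
        if old is sentinel:
            delattr(cls, name)
        else:
            setattr(cls, name, old)

class Bag(list):  # stand-in, since list itself can't be patched
    pass

def do_blaz(self):
    return sorted(self)

with extend(Bag, do_blaz):
    print(Bag([3, 1, 2]).do_blaz())  # [1, 2, 3]
```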

David Mertz writes:
That said, one crucial difference is once an extension method is "used" we are stuck with it for the entire module.
While you get it for all objects of type(foo) in the whole module, presumably whatever syntax you used to "use" it, you can use to get rid of it, or at least replace it with an extension that raises or logs or somehow signals that the extension is deactivated. Or maybe del works on it.
In contrast, functions can be both namespaced and redefined. [...] We get scoping and namespaces that extension method lack.
But isn't the point of an extension to a class that we want it to be class-wide in that module? I don't see why we should turn extensions into something we can already do with functions. (Assuming we want them at all, and so far only Steven's backporting application makes any sense to me).

On Fri, Jun 25, 2021 at 1:54 PM Ricky Teachey <ricky@teachey.org> wrote:
Would this feature allow me to declare str objects as not iterable in some contexts?
If so, +1.
That depends. If the proposal is to intercept every attribute lookup (parallel to a class's __getattribute__ method), then yes, but if it's a fallback after default behaviour fails (like __getattr__), then no. I suspect that the latter is more likely; it's much easier to avoid recursion problems if real attributes are tried first, plus it's likely to impact performance a lot less. ChrisA

On Fri, Jun 25, 2021 at 7:43 PM Stephen J. Turnbull <turnbull.stephen.fw@u.tsukuba.ac.jp> wrote:
It also means that this code applies only when other things have failed, so high performance lookups will still be high performance. But there've been many variants of this proposal in this thread, all subtly different. ChrisA

On Thu, Jun 24, 2021 at 11:53:55PM -0400, Ricky Teachey wrote:
While I'm glad to see some more positivity in this thread, alas, no, extension methods are not a mechanism for making strings non-iterable. Nor do they help with floating point inaccuracies, resolve problems with circular imports, or make the coffee *wink* Extension methods are a technology for extending classes with extra methods, not removing them. -- Steve

While IDK about Kotlin's extension methods, I do know that C# (a highly popular programming language) also has extension methods, so I'll talk about those. In C#, extension methods are plain static functions defined in a plain static class; the only key difference between normal static methods and these extension methods is that the first argument is prefixed with the "this" keyword. You can invoke extension methods as if they were defined on the type they're extending without actually changing the type; the only requirement is that the namespace containing the class that defines the method is imported (via "using"). In fact, .NET ships with a bunch of extension methods just for the IEnumerable and IEnumerable<T> interfaces in System.Linq.

Hi William, Thanks for the description of C# extension methods, but I think that like Britons and Americans, we're in danger of being divided by a common language. (To paraphrase Churchill.) On Sun, Jun 20, 2021 at 10:56:37PM -0000, William Pickard wrote:
I'm unsure what you mean by "static functions" and whether they are the same thing as "static methods". I believe that a static method is something different in Python and C#. When you say *prefixed by*, surely you don't actually mean a literal prefix? Using Python syntax:

```py
# I want the first argument to be called "param".
def extension(thisparam):
```

because that would be silly *wink* so I guess that you mean this:

```py
def extension(this, param):
```

except that in Python, we spell it "self" rather than "this", and it is not a keyword. So as far as the interpreter is concerned, whether spelled as "this", "self" or something else, that's just a regular function that takes two parameters, neither of which has any special or pre-defined meaning.
Let me see if I can interpret that, in Python terms. Suppose we have a module X.py which defines a list extension method imaginatively called "extension". In order to use that method from my module, I would have to say:

```py
using X
```

first, after which:

```py
hasattr(list, 'extension')
```

would return True. Otherwise, it would continue to return False. So modules have to opt in to use the extension method, rather than having the methods foisted on them as in monkey-patching. Am I close?

I think this sounds much more promising, since it avoids the downsides of monkey-patching. The problem is that in a language like C#, and I presume Kotlin, methods are resolved at compile time, but in Python they are resolved at runtime. So `using` would have to be some sort of registration system, which would require every attribute lookup to go through the registration system, looking for only those extension methods which are being used by the current module. I expect that would be slow and complex. But maybe I'm just not clever enough to think of an efficient way of handling it :-( -- Steve

Your assumption about requiring some form of registration system for Python to implement extensions is correct, as Roslyn (the C# compiler) resolves them at compile time (as long as the namespace is imported/the class is in the "global" namespace). When I said static function, it's basically me saying static method, and they're the same as a method in Python with "@staticmethod" applied to it. Here's a sample "prototype" of a System.Linq extension method (a very useful one imo):

```csharp
namespace System.Linq
{
    public static class Enumerable
    {
        public static IEnumerable<TResult> Select<TSource, TResult>(
            this IEnumerable<TSource> enumerable,
            Func<TSource, TResult> selector);
    }
}
```

As you can see, the param "enumerable" is prefixed by the "this" keyword; this tells Roslyn to treat "enumerable" as if it was the special implicit "this" operator in instance methods:

```csharp
using System.Linq;

namespace MyNS
{
    public sealed class MyCLS
    {
        public static readonly List<int> MyInts = new List<int>() { 0, 5, 12, 56, 9 };

        public int RNG = 42;

        public IEnumerable<int> ExpandInts()
        {
            return MyInts.Select(@int => @int * this.RNG);
        }
    }
}
```

As you can see in the above two quoted blocks (SOMEONE TELL ME HOW TO DO CODE BLOCKS!), I'm using the extension method "Select" defined in "Enumerable" as if it was defined on the interface "IEnumerable<T>" when it's actually not. (List<T> implements IList<T> which inherits ICollection<T> which inherits IEnumerable<T>.) The only requirement is that both the static class AND the method are visible to your code (with public being visible to everyone). Here's the official docs: https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and...
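For comparison, the Select example above, written in today's Python, is just a generator function — no method syntax involved:

```py
# A rough Python analogue of the C# Select extension method.
def select(iterable, selector):
    for item in iterable:
        yield selector(item)

my_ints = [0, 5, 12, 56, 9]
print(list(select(my_ints, lambda n: n * 42)))  # [0, 210, 504, 2352, 378]
```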

On 2021-06-20 7:48 p.m., Steven D'Aprano wrote:
The trick to extension methods is that they're only available when you explicitly use them. In other words, a module/package can't force them on you. The problem with "monkey-patching" is that it *does* get forced on you. With the way Python works, your only option to implement extension methods is to monkey-patch __getattr__. An alternative would be to have a module-level __getattr__, which by default resolves to the object's own attributes, but can be replaced by the module to provide proper (module-local) extension methods functionality. Thus, any module-local extension methods would also take priority over subclasses' attributes, without conflicting with them. (It would also slow everything down af, but that's beside the point.)

On 2021-06-21 12:26 p.m., Stephen J. Turnbull wrote:
Monkey-patching:

```py
# mod1.py
import foo
foo.Bar.monkeymethod = ...
```

```py
# mod2.py
import foo
foo.Bar.monkeymethod = ...
```

"Extension methods":

```py
# mod1.py
import foo

def __getattr__(o, attr):
    if isinstance(o, foo.Bar) and attr == "monkeymethod":
        return ...
    return getattr(o, attr)
```

```py
# mod2.py
import foo

def __getattr__(o, attr):
    if isinstance(o, foo.Bar) and attr == "monkeymethod":
        return ...
    return getattr(o, attr)
```

Note how the former changes foo.Bar, whereas the latter only changes the module's own __getattr__. You can't have conflicts with the latter. (Also note that this "module's own __getattr__" doesn't provide extension methods by itself, but can be used as a mechanism to implement extension methods.)

On Tue, Jun 22, 2021 at 1:44 AM Soni L. <fakedme+py@gmail.com> wrote:
So what you're saying is that, in effect, every attribute lookup has to first ask the object itself, and then ask the module? Which module? The one that the code was compiled in? The one that is currently running? Both? And how is this better than just using a plain ordinary function? Not everything has to be a method. ChrisA

On Tue, Jun 22, 2021 at 01:49:56AM +1000, Chris Angelico wrote:
Mu. https://en.wikipedia.org/wiki/Mu_(negative)#%22Unasking%22_the_question

We don't have an implementation yet, so it is too early to worry about precisely where you look up the extension methods, except that it is opt-in (so by default, there's no additional cost involved) and there must be *some* sort of registry *somewhere* that handles the mapping of extension methods to classes. We certainly don't want this to slow down *every method call*, but if it only slowed down method lookups a little bit when you actually used the feature, that might be acceptable.

We already make use of lots of features which are slow as continental drift compared to C, because they add power to the language and are *fast enough*. E.g. name lookups are resolved at runtime, not compile-time; dynamic attribute lookups using the `__getattribute__` and `__getattr__` dunders; virtual subclasses; generic functions (functools.singledispatch); descriptors.
And how is this better than just using a plain ordinary function? Not everything has to be a method.
You get method syntax, obj.method, which is nice but not essential. When a method is called from an instance, you know that the first parameter `self` has got to be the correct type, so no type-checking is required. That's good. And the same would apply to extension methods. You get bound methods as first-class values, which is useful. You get inheritance, which is powerful. And you get encapsulation, which is important.

I think this is a Blub moment. We don't think it's useful because we have functions, and we're not Java, so "not everything needs to be a method". Sure, but methods are useful, and they do bring benefits that top-level functions don't have. (And vice versa, of course.) We have staticmethod that allows us to write a "function" (-ish) but get the benefits of inheritance, encapsulation, and method syntax. This would be similar.

We acknowledge that there are benefits to monkey-patching. But we can't monkey-patch builtins and we are (rightly) suspicious of those who use monkey-patching in production. And this is good. But this would give us the benefits of monkey-patching without the disadvantages. *If* we can agree on semantics and come up with a reasonably efficient implementation that doesn't slow down every method call. -- Steve
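As a concrete illustration of the functools.singledispatch mention above — runtime dispatch on type, spelled as a function rather than a method:

```py
# Dispatch on the type of the first argument at runtime; a
# function-flavoured cousin of extension methods.
from functools import singledispatch

@singledispatch
def first_or_none(obj):
    raise TypeError(f"unsupported type: {type(obj).__name__}")

@first_or_none.register
def _(obj: list):
    return obj[0] if obj else None

print(first_or_none([1, 2, 3]))  # 1
print(first_or_none([]))         # None
```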

On Tue, Jun 22, 2021 at 3:43 AM Steven D'Aprano <steve@pearwood.info> wrote:
I'm actually not concerned so much with the performance as the confusion. What exactly does the registration apply to? And suppose you have a series of extension methods that you want to make use of in several modules in your project, how can you refactor a bunch of method registration calls so you can apply them equally in multiple modules? We don't need an implementation yet - but we need clear semantics.
True, all true, but considering that this is *not* actually part of the class, some of that doesn't really apply. For instance, is it really encapsulation? What does that word even mean when you're injecting methods in from the outside?
And that's a very very big "if". Monkey-patching can be used for unittest mocking, but that won't work here. Monkey-patching can be used to fix bugs in someone else's code, but that only works here if *your* code is in a single module, or you reapply the monkey-patch in every module. I'm really not seeing a lot of value in the proposal. Let's completely ignore the performance cost for the moment and just try to figure out semantics, with it being actually useful and not unwieldy. ChrisA

On Tue, Jun 22, 2021 at 03:56:00AM +1000, Chris Angelico wrote:
I'm actually not concerned so much with the performance as the confusion. What exactly does the registration apply to?
Good question. Extension methods have four steps:

- you write a method;
- declare which class it extends;
- the caller declares that they want to use extensions;
- and they get looked up at runtime (because we can't do static lookups).

The first two can go together. I might write a module "spam.py". Borrowing a mix of Kotlin and/or C# syntax, maybe I write:

```py
def list.head(self, arg): ...
def list.tail(self, arg): ...
```

or maybe we have a decorator:

```py
@extends(list)
def head(self, arg): ...
```

The third step happens at the caller site. Using the C# keyword, you might write this in your module "stuff.py":

```py
uses spam
```

or maybe there's a way to do it with the import keyword:

```py
# could be confused for `from spam import extensions`?
import extensions from spam

from functools import extension_methods
import spam
extension_methods.load_from(spam)
```

whatever it takes. Depends on how much of this needs to be baked into the interpreter.

Fourth step is that you go ahead and use lists as normal. Whether you use getattr or dot syntax, any extension methods defined in spam.py will show up, as if they were actual list methods.

```py
hasattr([], 'head')  # returns True
list.tail  # returns the spam.tail function object (unbound method)
```

They're not monkey-patched: other modules don't see that.
I put the extension methods in one library. That may not literally require me to put their definitions in a single .py file; I should be able to use a package and import extension methods from modules the same as any other object. But for ease of use for the caller, I probably want to make all my related extension methods usable from a single place. Then you, the caller, import/use them from each of your modules where you want to use them:

```py
# stuff.py
uses spam

# things.py
uses spam
```

And in modules where you don't want to use them, you just don't use them. [...]
Sure it's encapsulation. We can already do this with non-builtin classes:

```py
class SpammySpam:
    def spam(self, arg):
        ...
    from another_module import eggy_method

def aardvarks(self, foo, bar):
    ...

SpammySpam.aardvarks = aardvarks
```

The fact that two of those methods have source code that wasn't indented under the class statement is neither here nor there. Even the fact that eggy_method was defined in another module is irrelevant. What matters is that once I've put the class together, all three methods are fully encapsulated into the SpammySpam class, and other classes can define different methods with the same name.

Encapsulation is less about where you write the source code, and more about the fact that I can have SpammySpam().spam and Advertising().spam without the two spam methods stomping on each other. [...]
LINQ is a pretty major part of the C# ecosystem. I think that proves the value of extension methods :-) I know we're not really comparing apples with apples; Python's trade-offs are not the same as C#'s trade-offs. But Ruby is a dynamic language like Python, and they use monkey-patching all the time, proving the value of being able to extend classes without subclassing them.

Extension methods let us extend classes without the downsides of monkey-patching. Extension methods are completely opt-in while monkey-patching is mandatory for everyone. If we could only have one, extension methods would clearly be the safer choice.

We don't make heavy use of monkey-patching, not because it isn't a useful technique, but because:

- unlike Ruby, we can't extend builtins without subclassing;
- we're very aware that monkey-patching is a massively powerful technique with huge foot-gun potential;
- and most of all, the Python community is a hell of a lot more conservative than Ruby.

Even basic techniques intentionally added to the language (like being able to attach attributes onto function objects) are often looked at as if they were the worst kind of obfuscated self-modifying code. Even when those same techniques are used in the stdlib, people are still reluctant to use them. As a community, we're like cats: anything new and different scares us, even if it's actually been used for 30 years. We're a risk-averse community. -- Steve

On 2021-06-21 9:39 p.m., Steven D'Aprano wrote:
Python is a dynamic language. Maybe you're using hasattr/getattr to forward something from A to B. If "other modules don't see that" then this must work as if there were no extension methods in place. So you actually wouldn't want the local load_attr override to apply to those. If you did... well, just call the override directly. If the override was called __opcode_load_attr_impl__ you'd just call __opcode_load_attr_impl__ directly instead of going through getattr. There needs to be an escape hatch for this. Or you *could* have getattr be special (called by load_attr) and overridable, and builtins.getattr be the escape hatch, but nobody would like that.

I'm sorry Soni, I don't understand what you are arguing here. See below. On Mon, Jun 21, 2021 at 10:09:17PM -0300, Soni L. wrote:
What's "forward something from A to B" mean? What are A and B? If "this" (method lookups) "must work as if there were no extension methods in place" then extension methods are a no-op and are pointless. You write an extension method, register it as applying to a type, the caller opts-in to use it, and then... nothing happens, because it "must work as if there were no extension methods in place". Surely that isn't what you actually want to happen. But if not, I have no idea what you mean. The whole point of extension methods is that once the caller opts in to use them, method look ups (and that includes hasattr and getattr) must work as if the extension methods **are in place**. The must be no semantic difference between: obj.method(arg) and getattr(obj, 'method')(arg) regardless of whether `method` is a regular method or an extension method.
I have no idea what that means. What is "the local load_attr override"?
As a general rule, you should not be calling dunders directly. You seem to have missed the point that extension methods are intended as a mechanism to **extend a type** by giving it new methods on an opt-in basis. I want to call them "virtual methods" except that would add confusion regarding virtual subclasses and ABCs etc.

Maybe you need to read the Kotlin docs: https://kotlinlang.org/docs/extensions.html and the C# docs: https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and... Wikipedia also has a broad overview from a language-agnostic perspective: https://en.wikipedia.org/wiki/Extension_method Here's an example in TypeScript and Javascript: https://putridparrot.com/blog/extension-methods-in-typescript/

In particular note these comments:

# Kotlin "Such functions are available for calling in the usual way as if they were methods of the original class."

# C# "Extension methods are only in scope when you explicitly import the namespace into your source code with a using directive."

Both C# and Kotlin are statically typed languages, and Python is not, but we ought to aim to minimise the differences in semantics. Aside from extension methods being resolved at runtime instead of at compile time, the behaviour ought to be as close as possible. Just as single dispatch in Python is resolved dynamically, but aims to behave as close as possible to single dispatch in statically typed languages.

Another important quote: "Because extension methods are called by using instance method syntax, no special knowledge is required to use them from client code. To enable extension methods for a particular type, just add a `using` directive for the namespace in which the methods are defined."

"No special knowledge is required" implies that, aside from the opt-in step itself, extension methods must behave precisely the same as regular methods. That means they will be accessible as bound methods on the instance:

```py
obj.method
```

and unbound methods (functions) on the type:

```py
type(obj).method
```

and using dynamic lookup:

```py
getattr(obj, 'method')
```

and they will fully participate in inheritance hierarchies if you have opted in to use them.
There needs to be an escape hatch for this.
The escape hatch is to *not* opt in to the extension method. If the caller doesn't opt in, they don't get the extension methods. That is the critical difference between extension methods and monkey-patching the type. Monkey-patching affects everyone. Extension methods have to be opt-in.
Huh? Unless you have shadowed getattr with a module-level function, getattr *is* builtins.getattr. -- Steve

On Tue, Jun 22, 2021 at 8:01 PM Steven D'Aprano <steve@pearwood.info> wrote:
And this is a problem. How is getattr defined? Is it counted as being in the current module? If it is, then has getattr magically become part of the module it's called from? Or do ALL lookups depend on where the function was called, rather than where it's defined? If 'method' is an extension method, where exactly is it visible? ChrisA

On Tue, Jun 22, 2021 at 09:12:53PM +1000, Chris Angelico wrote:
If it's a problem for getattr, it is a problem for dot syntax, because they are essentially the same thing.
How is getattr defined?
The same as it is defined now, except with some minor tweaks to support extension methods. Do you remember when we introduced `__slots__` (version 2.2 or 2.3, I think?), and added a whole new mechanism to look up ordinary attributes and slot attributes? No, neither do I, because we didn't. We have a single mechanism for looking up attributes, including methods, which works with instances and classes, descriptors and non-descriptors, C-level slots and Python `__slots__` and `__dict__` and `__getattr__` and `__getattribute__`, and I am absolutely positively sure that if Python ever adds a new implementation for attribute lookup, it will still be handled by the same getattr mechanism, which is built into the interpreter.
Is it counted as being in the current module?
`getattr`? No, that's a builtin. You can shadow it or delete it if you want, it's just a public API to the underlying functionality built into the interpreter. Dot syntax won't be affected.
Can you be a bit more precise? I'm not suggesting that we introduce dynamic scoping instead of lexical scoping, if that's what you mean. Attribute lookups already depend on the state of the object at the time the lookup is made. This is just more of the same.

```py
class K: pass

K.attr  # this is an AttributeError
K.attr = 'extension'
K.attr  # this is fine
```
If 'method' is an extension method, where exactly is it visible?
I believe that TypeScript uses "import" for this, so it would be visible from anywhere that imports it: https://putridparrot.com/blog/extension-methods-in-typescript/ -- Steve

On Tue, Jun 22, 2021 at 9:56 PM Steven D'Aprano <steve@pearwood.info> wrote:
Ahh but that is precisely the problem.
Do those tweaks include reaching back into the module that called it? How magical will it be?
Let me clarify then. We shall assume for the moment that the builtins module does not have any extension methods registered. (I suppose it could, but then you get all the action-at-a-distance of monkey-patching AND the problems of extension methods, so I would hope people don't do this.) This means that the getattr() function, being a perfectly straight-forward function, is not going to see any extension methods. Okay then.

```py
# whatever the actual syntax is
@extend(list)
def in_order(self):
    return sorted(self)

stuff = [1, 5, 2]
stuff.in_order()  # == [1, 2, 5]
getattr(stuff, "in_order")()  # AttributeError
```

Does the getattr function see the extension methods? If so, which? If not, how can getattr return the same thing as attribute lookup does? How do you inform getattr of which extension methods it should be looking at? And what about this?

```py
f = functools.partial(getattr, stuff)
f("in_order")
```

NOW which extension methods should apply? Those registered here? Those registered in the builtins? Those registered in functools? Yes, monkey-patching *is* cleaner, because the object is the same object no matter how you look it up.

(Oh, and another wrinkle, although a small one: Code objects would need to keep track of their modules. Currently functions do, but code objects don't. But that seems unlikely to introduce further complications.) ChrisA

On Tue, Jun 22, 2021 at 10:25:33PM +1000, Chris Angelico wrote:
Is it? Don't be shy. Tell us what the problem is and why its a problem.
I thought you agreed that we didn't need to discuss implementation until we had decided on the desired semantics? Let's just say it will be a well-defined, totally non-magical implementation (like everything else in Python) that manages to be equally efficient as regular attribute access. A high bar to set. Implementation issues may require us to dial that back a bit, or might even rule out the concept altogether, but let's start off by assuming the best and decide on the semantics first. [...]
Let me clarify then.
Thank you, that would be helpful.
How do you know that the builtins aren't already using extension methods? Let's pretend that you didn't know that CPython's implementation was C rather than C#. Or that C has support for something similar to extension methods. (I daresay you could simulate it, somehow.) Or that we're talking about IronPython, for example, which is implemented in C#.

I might tell you that list.sort and list.index are regular methods, and that list.append and list.reverse are extension methods. Short of looking at the source code, there would be absolutely no way for you to tell if I were correct or not. So if builtins used extension methods, that would be indistinguishable from builtins not using extension methods.

(To be pedantic: this would only be true if those extension methods were added at interpreter startup, before the interpreter ran any user code. Otherwise you could take a snapshot of `dir(list)` before and after, and inspect the differences.)

To be clear, this is distinct from *a user module* using extension methods on a builtin type, which is normal and the point of the exercise. Do I need to explain the difference between the interpreter using extension methods as part of the builtins implementation, and user-written modules ("spam.py") extending builtin classes with extension methods? Because they are completely different things.
This means that the getattr() function, being a perfectly straight-forward function, is not going to see any extension methods.
Does getattr see slots (both C-level and Python)? Yes. Does it see attributes in instance and class dicts? Yes. Does it see dynamic attributes that use `__getattr__`? Yes. Does it understand the descriptor protocol? Yes. It does everything else dot notation does. Why wouldn't it see extension methods? (Apart from spite.) The getattr builtin is just a public interface to whatever internal function or functions the interpreter uses to look up attributes.
What reason do you have for thinking that would be how it works? Not a rhetorical question: is that how it works in something like Swift, or Kotlin?
Does the getattr function see the extension methods? If so, which?
Yes, and the same ones you would see if you used dot syntax.
If not, how can getattr return the same thing as attribute lookup does?
I think you've just answered your own question. getattr has to return the same thing as attribute lookup, because if it didn't, it wouldn't be returning the same thing as attribute lookup, which is getattr's reason to exist.
How do you inform getattr of which extension methods it should be looking at?
You don't. You inform the interpreter that you are opting in to use extension methods on a type, the interpreter does whatever it needs to do to make it work (implementation), and then it Just Works™.
partial is just a wrapper around its function argument, so that should behave *exactly* the same as `getattr(stuff, 'in_order')`.
Yes, monkey-patching *is* cleaner, because the object is the same object no matter how you look it up.
Oh for heaven's sake, I'm not proposing changes to Python's object identity model! Please don't invent bogus objections that have no basis in the proposal. The id() function and `is` operator will work exactly the same as they do now. Classes with extension methods remain the same object. The only difference is in attribute lookups.
(Oh, and another wrinkle, although a small one: Code objects would need to keep track of their modules.
Would they? I'm not seeing the connection between code objects used by functions and attribute lookups. Perhaps they would, but it's not clear to me what implementation you are thinking of when you make this statement. `getattr` doesn't even have a `__code__` attribute, and neither do partial objects. -- Steve

On Wed, Jun 23, 2021 at 11:40 AM Steven D'Aprano <steve@pearwood.info> wrote:
That's exactly what the rest of the post is about.
Semantics are *exactly* what I'm talking about.
Okay. Lemme give it to you *even more clearly* since the previous example didn't satisfy.

```py
# file1.py
@extend(list)
def in_order(self):
    return sorted(self)

def frob(stuff):
    return stuff.in_order()

# file2.py
from file1 import frob

thing = [1, 5, 2]
frob(thing)  # == [1, 2, 5]

def otherfrob(stuff):
    return stuff.in_order()

otherfrob(thing)  # AttributeError
```

Am I correct so far? The function imported from file1 has the extension method, the code in file2 does not. That's the entire point here, right? Okay. Now, what if getattr is brought into the mix?

```py
# file3.py
@extend(list)
def in_order(self):
    return sorted(self)

def fetch1(stuff, attr):
    if attr == "in_order":
        return stuff.in_order
    if attr == "unordered":
        return stuff.unordered
    return getattr(stuff, attr)

def fetch2(stuff, attr):
    return getattr(stuff, attr)

# file4.py
from file3 import fetch1, fetch2
import random

@extend(list)
def unordered(self):
    return random.shuffle(self[:])

def fetch3(stuff, attr):
    if attr == "in_order":
        return stuff.in_order
    if attr == "unordered":
        return stuff.unordered
    return getattr(stuff, attr)

def fetch4(stuff, attr):
    return getattr(stuff, attr)

thing = [1, 5, 2]
fetch1(thing, "in_order")()
fetch2(thing, "in_order")()
fetch3(thing, "in_order")()
fetch4(thing, "in_order")()
fetch1(thing, "unordered")()
fetch2(thing, "unordered")()
fetch3(thing, "unordered")()
fetch4(thing, "unordered")()
```

Okay. *NOW* which ones raise AttributeError, and which ones give the extension method? What exactly are the semantics of getattr? Is it a magical function that can reach back into the module that called it, or is it actually a function of its own? And if getattr is supposed to reach back into the other module, why shouldn't other functions be able to?

Please explain exactly what the semantics of getattr are, and exactly which modules it is supposed to be able to see. Remember, it is not a compiler construct or an operator. It is a function, and it lives in its own module (the builtins).
Not a rhetorical question: is that how it works in something like Swift, or Kotlin?
I have no idea. I'm just asking how you intend it to work in Python. If you want to cite other languages, go ahead, but I'm not assuming that they already have the solution, because they are different languages. Also not a rhetorical question: Is their getattr equivalent actually an operator or compiler construct, rather than being a regular function? Because if it is, then the entire problem doesn't exist.
So if it behaves exactly the same way that getattr would, then is it exactly the same as fetch2 and fetch4? If not, how is it different? What about other functions implemented in C? If I write a C module that calls PyObject_GetAttr, does it behave as if dot notation were used in the module that called me, or does it use my module's extension methods? You are handwaving getattr a crazy amount of magic here that basically amounts to "do what I want".
You know what I mean. Stop being obtuse. The object has notably different behaviour depending on where you are when you look at it. In every other way in Python, an object is what it is regardless of who's asking - but now this is proposing changing that.
Attribute lookups are done by bytecode, which lives in code objects. You can execute a code object without an associated function, and you can have functions in different modules associated with the same code object. When you run that bytecode, which set of extension methods would it look up? The sanest approach I can think of is that the code object would remember which module it was created in (which is broadly the same as the way PEP 479 does things - although since that's a binary state, it simply sets one flag on the code object).
`getattr` doesn't even have a `__code__` attribute, and neither do partial objects.
Builtin functions don't have bytecode, they have C code, but they'd need an equivalent. Partial objects have a func attribute, which would be where you'd go looking for the code (either a code object or C code). None of this changes the fact that code objects still would need to know their modules. ChrisA

On Wed, Jun 23, 2021 at 03:47:05PM +1000, Chris Angelico wrote:
Correct so far.
Okay. Now, what if getattr is brought into the mix?
To a first approximation (ignoring shadowing), every dot lookup can be replaced with getattr and vice versa:

```py
obj.name  <-->  getattr(obj, 'name')
```

A simple source code transformation could handle that, and the behaviour of the code should be the same. Extension methods shouldn't change that.
In file3's scope, there is no list.unordered method, so any call like

```py
some_list.unordered
getattr(some_list, 'unordered')
```

will fail, regardless of which list some_list is, or where it was created. That implies that:

```py
fetch1(some_list, 'unordered')
fetch2(some_list, 'unordered')
```

will also fail. It doesn't matter who is calling the functions, or what module they are called from. What matters is the context where the attribute lookup occurs, which in fetch1 and fetch2 is the file3 scope.
```py
# file4.py
from file3 import fetch1, fetch2
```
Doesn't matter that fetch1 and fetch2 are imported into file4. They are still executed in the global scope of file3. If they called `globals()`, they would see file3's globals, not file4's. Same thing for extension methods.
I think that's going to always return None :-)
In the scope of file4, there is no list method "in_order", but there is a list method "unordered". So

```py
some_list.in_order
getattr(some_list, 'in_order')
```

will fail. That implies that:

```py
fetch3(some_list, 'in_order')
fetch4(some_list, 'in_order')
```

will also fail. It doesn't matter who is calling the functions, or what module they are called from. What matters is the context where the attribute lookup occurs, which in fetch3 and fetch4 is the file4 scope.

(By the way, I think that your example here is about ten times more obfuscated than it need be, because of the use of generic, uninformative names with numbers.)
Look at the execution context. fetch1(thing, "in_order") and fetch2(thing, "in_order") execute in the scope of file3, where lists have an in_order extension method. It doesn't matter that they are called from file4: the body of the fetchN functions, where the attribute access takes place, executes where the global scope is file3 and hence the extension method "in_order" is found and returned.

For the same reason, both fetch1(thing, "unordered") and fetch2(thing, "unordered") will fail. It doesn't matter that they are called from file4: their execution context is their global scope, file3, and just as they see file3's globals, not the caller's, they will see file3's extension methods.

(I say "the module's extension methods", not necessarily to imply that the extension methods are somehow attached to the module, but only that there is some sort of registry that says, in effect, "if your execution context is module X, then these extension methods are in use".)

Similarly, the body of fetch3 and fetch4 execute in the execution context of file4, where list has been extended with an unordered method. So fetch3(thing, "unordered") and fetch4(thing, "unordered") both return that unordered method. For the same reason (the execution context), fetch3(thing, "in_order") and fetch4(thing, "in_order") both fail.
What exactly are the semantics of getattr?
Oh gods, I don't know the exact semantics of attribute look-ups now! Something like this, I think:

```
obj.attr (same as getattr(obj, 'attr')):

if type(obj).__dict__['attr'] exists and is a data descriptor:
    # data descriptors are the highest priority
    return type(obj).__dict__['attr'].__get__()
elif obj.__dict__ exists and obj.__dict__['attr'] exists:
    # followed by instance attributes in the instance dict
    return obj.__dict__['attr']
elif type(obj) defines __slots__ and there is an 'attr' slot:
    # then instance attributes in slots
    if the slot is filled:
        return contents of slot 'attr'
    else:
        raise AttributeError
elif type(obj).__dict__['attr'] exists:
    if it is a non-data descriptor:
        return type(obj).__dict__['attr'].__get__()
    else:
        return type(obj).__dict__['attr']
elif type(obj) defines a __getattr__ method:
    return type(obj).__getattr__(obj)
else:
    # search the superclass hierarchy
    ...
    # if we get all the way to the end
    raise AttributeError
```

I've left out `__getattribute__`; I *think* that gets called right at the beginning. Also the result of calling `__getattr__` is checked for descriptor protocol too. And the look-ups on classes are slightly different. Also when looking up on classes, metaclasses may get involved. And super() defines its own `__getattribute__` to customize the lookups. (As other objects may do too.) And some of the fine details may be wrong.

But, overall, the "big picture" should be more or less correct:

1. check for data descriptors;
2. check for instance attributes (dict or slot);
3. check for non-data descriptors and class attributes;
4. call __getattr__ if it exists;
5. search the inheritance hierarchy;
6. raise AttributeError if none of the earlier steps matched.

If we follow C# semantics, extension methods would be checked after step 4 and before step 5:

```
if the execution context is using extensions for this class
and 'attr' is an extension method:
    return that method
```
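The priority of those first few steps can be checked empirically; a runnable illustration using only standard descriptors:

```py
# A data descriptor (defines __set__) beats the instance dict;
# a non-data descriptor (only __get__) loses to it.
class DataDesc:
    def __get__(self, obj, objtype=None):
        return "data descriptor"
    def __set__(self, obj, value):
        raise AttributeError("read-only")

class NonDataDesc:
    def __get__(self, obj, objtype=None):
        return "non-data descriptor"

class C:
    d = DataDesc()
    n = NonDataDesc()

c = C()
c.__dict__.update(d="instance", n="instance")
print(c.d)  # data descriptor
print(c.n)  # instance
```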
You seem to think that getattr being a function makes a difference. Why? Aside from the possibility that it might be shadowed or deleted from builtins, can you give me any examples where `obj.attr` and `getattr(obj, 'attr')` behave differently? Even *one* example?

Okay, this is Python. You could write a class with a `__getattr__` or `__getattribute__` method that inspected the call chain and did something different if it spotted a function called "getattr". Congratulations, you are very smart and Python is very dynamic. You might even write a __getattr__ that, oh, I don't know, returned a method if the execution context had opted in to a system that provided extra methods to your class. But I digress.

But apart from custom-made classes that deliberately play silly buggers if they see that getattr is involved, can you give an example of where it behaves differently to dot syntax?
I really don't know why you think getattr being a function makes any difference here. It's a builtin function, written in C, and can and does call the same internal C routines used by dot notation.
Okay, let's look at the partial object:

```py
>>> import functools
>>> f = functools.partial(getattr, [10, 20])
>>> f('index')(20)
1
```

Partial objects like f don't seem to have anything like a __globals__ attribute that would allow me to tell what the execution context would be. I *think* that for Python functions (def or lambda) they just inherit the execution context from the function. For builtins, I'm not sure. I presume their execution context will be the current scope.

Right now, I've already spent multiple hours on these posts, and I have more important things to do now than argue about the minutiae of partial's behaviour. But if you wanted to do an experiment, you could do something like comparing the behaviour of:

```py
# module A.py
f = lambda: globals()
g = partial(globals)

# module B.py
from A import f, g
f()
g()
```

and see whether f and g behave identically. I expect that f would return A's globals regardless of where it was called from, but I'm not sure what g would do. It might very well return the globals of the calling site.

In any case, with respect to getattr, the principle would be the same: the execution context defines whether the partial object sees the extension methods or not. If the execution context is A, and A has opted in to use extension methods, then it will see extension methods. If the context is B, and B hasn't opted in, then it won't.
That depends. If you write a C module that calls PyObject_GetAttr right now, is that *exactly* the same as dot notation in pure-Python code? The documentation is terse: https://docs.python.org/3.8/c-api/object.html#c.PyObject_GetAttr but if it is correct that it is precisely equivalent to dot syntax, then the same rules will apply. Has the current module opted in? If so, then does the class have an extension method of the requested name?

Same applies to code objects evaluated without a function, or whatever other exotic corner cases you think of. Whatever you think of, the answer will always be the same:

- if the execution context is a module that has opted to use extension methods, then attribute access will see extension methods;
- if not, then it won't.

If you think of a scenario where you are executing code where there is no module scope at all, and all global lookups fail, then "no module" cannot opt in to use extension methods and so the code won't see them. If you can think of a scenario where you are executing code where there are multiple module scopes that fight for supremacy using their two weapons of fear, surprise and a fanatical devotion to the Pope, then the winner will determine the result. *wink* -- Steve

On 2021-06-23 10:21 a.m., Steven D'Aprano wrote:
But if getattr is part of the builtins module, written in C, and the builtins module is calling PyObject_GetAttr, and PyObject_GetAttr is exactly the same as the dot notation... Then getattr is exactly the same as the dot notation, **in the builtins module**! The builtins module doesn't use any extension methods, as it is written in C. As such getattr(foo, "bar") MUST NOT produce the same result as foo.bar if extension methods are at play! (You're still missing the point of extension methods. Do check out our other reply.)

On Wed, Jun 23, 2021 at 11:25 PM Steven D'Aprano <steve@pearwood.info> wrote:
Alright. In that case, getattr() has stopped being a function, and is now a magical construct of the compiler. What happens if I do this?

```py
if random.randrange(2):
    def getattr(obj, attr):
        return lambda: "Hello, world"

def foo(thing):
    return getattr(thing, "in_order")()
```

Does it respect extension methods or not? If getattr is a perfectly ordinary function, as it now is, then it should be perfectly acceptable to shadow it. It should also be perfectly acceptable to use any other way of accessing attributes - for instance, the PyObject_GetAttr() function in C. Why should getattr() become magical?
Oops, my bad :) Not that it changes anything, given that we care more about whether they trigger AttributeError than what they actually do. (Chomp the rest of the discussion, since that was all based on the assumption that getattr was a normal function.)
That is exactly what's weird about it. Instead of looking up the name getattr and then calling a perfectly ordinary function, now it has to be a magical construct of the compiler, handled right there. It is, in fact, impossible to craft equivalent semantics in a third-party function. Currently, getattr() can be defined in C on top of the C API function PyObject_GetAttr, which looks solely at the object and not the execution context. By your proposal, getattr() can only be compiler magic.
That is *precisely* the possibility. That is exactly why it is magical by your definition, and nonmagical by the current definition. At the moment, getattr() is just a function, hasattr() is just a function. I can do things like this:

    def ga(obj, attr):
        return getattr(obj, attr)

Or this:

    ga = getattr

Or this:

    PyObject *my_getattr(PyObject *obj, PyObject *attr)
    {return PyObject_GetAttr(obj, attr);}

But if it's possible to do a source code transformation from getattr(obj, "attr") to obj.attr, then it is no longer possible to do *ANY* of this. You can't have an alias for getattr, you can't have a wrapper around it, you can't write your own version of it. In some languages, this is acceptable and unsurprising, because the getattr-like feature is actually an operator. (For instance, JavaScript fundamentally defines obj.attr as being equivalent to obj["attr"], so if you want dynamic lookups, you just use square brackets.) In Python, that is simply not the case. Are you proposing to break backward compatibility and all consistency just for the sake of this?
But if it's just a builtin function, then how is it going to know the execution context it's supposed to look up attributes in? If getattr(obj, "attr") can be implemented by a third party, show me how you would write the function such that it knows which extension methods to look for. You keep coming back to this assumption that it has to be fundamentally equivalent to obj.attr, but that's the exact problem - there is no way to define a function that can know the caller's context (barring shenanigans with sys._getframe), so it has to be compiler magic instead, which means it is *not a function any more*.
Do you see the problem, then? The partial object has to somehow pass along the execution context. Otherwise, functools.partial(getattr, obj)("attr") won't behave identically to obj.attr.
Correct on both counts - f() naturally has to return the globals from where it is, and g() uses the calling site. Whichever way you do it, somewhere, you're going to have a disconnect between obj.attr and the various dynamic ways of looking it up. It is going to happen. So why are you fighting so hard for getattr() to become magical in this way?
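(For what it's worth, that behaviour can be checked without creating separate files, by simulating module A with exec -- a quick sketch, assuming it's run as a script:)

    import types

    A = types.ModuleType('A')
    exec("from functools import partial\n"
         "f = lambda: globals()['__name__']\n"
         "g = partial(globals)", A.__dict__)

    print(A.f())              # 'A' -- the lambda keeps A's globals
    print(A.g()['__name__'])  # '__main__' -- partial(globals) sees the
                              # globals of the calling frame instead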
The "current module", logically, would be the extension module.
Multiple scopes can definitely be plausible, but there'll just have to be some definition for which one wins. My guess would be that the function object wins, but if not, the code object should know its own context, and if not that, then there is a null context with no extension methods. But that's nothing more than a guess, and there's no rush on pinning that part down precisely. I am wholeheartedly against this proposal if it means that getattr has to become magical. If, however, getattr simply ignores extension methods, and the ONLY thing changed is the way that dot lookup is done, I would be more able to see its value. (Though not so much that I'd actually be using this myself. I don't think it'd benefit any of my current projects. But I can see its potential value for the language.) ChrisA

On Thu, Jun 24, 2021 at 12:17:17AM +1000, Chris Angelico wrote:
On Wed, Jun 23, 2021 at 11:25 PM Steven D'Aprano <steve@pearwood.info> wrote:
How do you come to that conclusion? Did you miss the part where I said "To a first approximation (ignoring shadowing)"? What I'm describing is not some proposed change, it is the status quo. getattr is equivalent to dot notation, as it always has been, all the way back to 1.5 or older. There's no change there. getattr is a function. A regular, plain, ordinary builtin function. You can shadow it, or bind it to another name, or reach into builtins and delete it, just like every other builtin function. But if you don't do any of those things, then it is functionally equivalent to dot lookup.
According to the result of the random number generator, either the lambda will be returned by the shadowed getattr, or the attribute "in_order" will be looked up on obj. Just like today.
If getattr is a perfectly ordinary function, as it now is, then it should be perfectly acceptable to shadow it.
Correct.
Again, correct.
Why should getattr() become magical?
It doesn't.
I don't think that it is impossible to emulate attribute lookup in pure Python code. It's complicated, to be sure, but I'm confident it can be done. Check out the Descriptor How To Guide, which is old but as far as I can tell still pretty accurate in its description of how attributes are looked up.
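For anyone who wants to see roughly what that emulation looks like, here is a condensed, pure-Python sketch of `object.__getattribute__`, paraphrased from the Descriptor HowTo Guide (simplified: metaclasses and the `__getattr__` fallback are ignored):

    _MISSING = object()

    def emulate_getattribute(obj, name):
        objtype = type(obj)
        cls_var = _MISSING
        for base in objtype.__mro__:          # find the class variable
            if name in vars(base):
                cls_var = vars(base)[name]
                break
        descr_get = getattr(type(cls_var), '__get__', None)
        if descr_get is not None and (
                hasattr(type(cls_var), '__set__')
                or hasattr(type(cls_var), '__delete__')):
            return descr_get(cls_var, obj, objtype)   # data descriptor
        inst_dict = getattr(obj, '__dict__', {})
        if name in inst_dict:
            return inst_dict[name]                    # instance variable
        if descr_get is not None:
            return descr_get(cls_var, obj, objtype)   # non-data descriptor
        if cls_var is not _MISSING:
            return cls_var                            # plain class variable
        raise AttributeError(name)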
The only "magic" that is needed is the ability to inspect the call stack to find out the module being called from. CPython provides functions to do that in the inspect library: `inspect.stack` and `inspect.getmodule`. Strictly speaking, they are not portable Python, but any interpreter ought to be able to provide analogous abilities. Do you think that the functions in the gc library are "compiler magic"? It would be next to impossible to emulate them from pure Python in an interpreter-independent fashion. Some interpreters don't even have reference counts. How about locals()? That too has a privileged implementation, capable of doing things likely impossible from pure, implementation-independent Python code. It's still a plain old regular builtin function that can be shadowed, renamed and deleted.
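For what it's worth, here is a minimal sketch of that kind of caller inspection, using just the two inspect functions named above (CPython-specific; another interpreter would need an equivalent facility):

    import inspect

    def calling_module():
        # Frame 0 is this function; frame 1 is whoever called us.
        frame = inspect.stack()[1].frame
        return inspect.getmodule(frame)

A hypothetical extensions-aware getattr() could use something like this to decide whether the caller's module has opted in, before falling back to the normal lookup.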
And it will remain the possibility.
That is exactly why it is magical by your definition
It really won't.
In the absence of any shadowing or monkey-patching of builtins. -- Steve

Oh this is a long one. Hypothetically, let's say you have a proxy object:

    class Foo:
        def __getattribute__(self, thing):
            return getattr(super().__getattribute__("proxied"), thing)

Should this really include extension methods into it by default? This is clearly wrong. The local override for the LOAD_ATTR opcode should NOT apply to proxy methods except where explicitly requested. Also, sometimes you are supposed to call the dunder directly, like in the above example; it's not *bad* to do it if you know what you're doing. The point is that the caller using your proxy object should opt in to the extension methods, rather than break with no way to opt out of them. Your extension methods shouldn't propagate to proxy objects. To go even further, should all your class definitions that happen to extend a class with in-scope extension methods automatically gain those extension methods? Because with actual extension methods, that doesn't happen. You can have

    class MyList(list):
        pass

and other callers would not get MyList.flatten even with you being able to use MyList.flatten locally. Extension methods are more like Rust traits than inheritance-based OOP. Also note that they use instance method syntax, but nothing else: they apply to LOAD_ATTR opcodes but should not apply to getattr! (Indeed, reflection in C#/Kotlin doesn't see the extension methods!) On 2021-06-22 6:57 a.m., Steven D'Aprano wrote:

On Tue, Jun 22, 2021 at 08:44:56AM -0300, Soni L. wrote:
By default? Absolutely not. Extension methods are opt-in.
This is clearly wrong.
What is clearly wrong? Your question? A "yes" answer? A "no" answer? Your proxy object? Soni, and Chris, you seem to be responding as if extension methods are clearly, obviously and self-evidently a stupid idea. Let me remind you that at least ten languages (C#, Java, TypeScript, Oxygene, Ruby, Smalltalk, Kotlin, Dart, VB.NET and Swift) support it. Whatever the pros and cons of the technique, it is not self-evidently wrong or stupid.
The local override for the LOAD_ATTR opcode should NOT apply to proxy methods except where explicitly requested.
Why not? Let me ask you this:

- should proxy objects really include `__slots__` by default?
- should they really include dynamic attributes generated by `__getattr__`?
- should they include attributes in the inheritance hierarchy?
- why should extension methods be any different?

Let's step back from extension methods and consider a similar technique, the dreaded monkey-patch. If I extend a class by monkey-patching it with a new method:

    import library
    library.Klass.method = patch_method

would you expect that (by default) the patched method is invisible to proxies of Klass? Would you expect there to be a way to "opt out" of proxying that method? I hope that your answers are "No, and no", because if either answer is "yes", you will be very disappointed in Python. Why should extension methods be different from any other method? Let's go through the list of methods which are all treated the same:

- methods defined on the class;
- methods defined on a superclass or mixin;
- methods added onto the instance;
- methods created dynamically by `__getattr__`.

(Did I miss any?) And the list of those which are handled differently, with ways to opt out of seeing them:

- ... um... er...

Have I missed any?
The point is that the caller using your proxy object should opt-in to the extension methods, rather than break with no way to opt-out of them.
You opt-out by not opting in.
Your extension methods shouldn't propagate to proxy objects.
Fundamentally, your proxy object is just doing attribute lookups on another object. If you have a proxy to an instance `obj`, there should be no difference in behaviour between extension methods and regular methods. If `obj.method` succeeds, so should `proxy.method`, because that's what proxies do. The origin of obj.method should not make any difference. I'm sorry to have to keep harping on this, but it doesn't matter to the proxy whether the method exists in the instance `__dict__`, or the class `__dict__`, or `__slots__`, or a superclass, or is dynamically generated by `__getattr__`. A method is a method. Extension methods are methods.
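To make that concrete, here is a minimal sketch of the sort of delegation proxy we are talking about:

    class Proxy:
        def __init__(self, target):
            self._target = target

        def __getattr__(self, name):
            # Called only when normal lookup on the proxy fails.
            # It cannot tell, and shouldn't care, whether the target's
            # attribute comes from the instance dict, the class, a
            # superclass, __getattr__, or an extension method.
            return getattr(self._target, name)

Whatever attribute lookup sees on the target, the proxy forwards.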
We might want to follow the state of the art here, assuming there was consensus in other languages about inheriting extension methods. But I would expect this behaviour:

    # --- extensions.py library ---
    @extends(list)
    def flatten(self): ...

    # --- module A.py ---
    uses extensions  # opt-in to use the extension method

    class MyListA(list):
        pass

    MyListA.flatten  # inherits from list

    # --- module B.py ---
    class MyListB(list):
        pass

    MyListB.flatten  # raises AttributeError

However, there may be factors I haven't considered.
I understand that Rust doesn't support inheritance at all, and that Rust traits are more like what everyone else calls "interfaces".
Okay, that's a good data point. The question is, why doesn't reflection see the extension methods? That will help us decide whether that's a limitation of reflection in those languages, or a deliberate design feature we should follow. A brief search suggests that people using C# do want to access extension methods via reflection, and that there are ways to do so: https://duckduckgo.com/?q=c%23+invoke+extension+method+via+reflection -- Steve

On Wed, Jun 23, 2021 at 6:25 PM Steven D'Aprano <steve@pearwood.info> wrote:
It's not self-evidently wrong or stupid. But the semantics, as given, don't make sense. Before this could ever become part of the language, it will need some VERY well-defined semantics. (Preferably, semantics that don't depend on definitions involving CPython bytecode, although I'm fine with it being described like that for the time being. But ultimately, other Pythons will have to be able to match the semantics.) You have been saying certain things as if they are self-evidently right, without any justification or explanation.
They are different because they are *context-sensitive*. Every other example you have given is attached to the object itself. The object defines whether it has __slots__, __dict__, a __getattr__ method, a __getattribute__ method, and superclasses. The object defines, in those ways, which attributes can be looked up, and it doesn't matter how you ask the question, you'll get the same answer. Calling getattr(obj, "thing") is the same as obj.thing is the same as PyObject_GetAttr(ptr_to_obj, ptr_to_string_thing) is the same as any other way you would look it up. Extension methods change that. Now it depends on which module you are in. That means you're either going to have to forfeit these consistencies, or they are going to need to figure out WHICH module you are working with. The simplest definition is this: Extension methods apply *only* to dot notation here in the current module. Every piece of code compiled in this module will look up dotted attributes using extensions active in this module. (In CPython terms, that affects the behaviour of LOAD_ATTR only, and would mean that the code object retains a reference to that module.) That's pretty reasonable. But to accept this simple definition, you *must* forfeit the parallel with getattr(), since getattr() is defined in the builtins, NOT in your module. Yet you assert that, self-evidently, getattr(obj, "thing") MUST be the same as obj.thing, no matter what.
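To be explicit about that "simplest definition": a dotted lookup compiled in some module would behave roughly like this sketch, where module_extensions stands in for a hypothetical per-module table, e.g. {list: {"flatten": <function>}} (all names here are invented):

    def load_attr(module_extensions, obj, name):
        try:
            return getattr(obj, name)   # normal lookup wins if it succeeds
        except AttributeError:
            for cls in type(obj).__mro__:
                methods = module_extensions.get(cls, {})
                if name in methods:
                    return methods[name].__get__(obj, cls)  # bind like a method
            raise

Each module would get its own table, and getattr -- defined in the builtins -- would have no table at all.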
For my part, I absolutely agree with you - the proxy should see it. But that's because the *object*, not the module, is making that decision.
Yes, all things that are defined by the object, regardless of its context.
Nope. Python currently has a grand total of zero ways to have attributes whose existence depends on the module of the caller. (Barring shenanigans with __getattr__ and sys._getframe. Or ctypes. I think we can all agree that that sort of thing doesn't count.)
By definition, extension methods are methods in one module, and not methods in another module.
Yes, I'd definitely like to know this too.
The answers appear to be bypassing the extension method and going for the concrete function that underlies it. That seems perfectly reasonable, but it's basically an acknowledgement that extension methods don't show up in these kinds of ways. So... is it really so self-evident that getattr(obj, "thing") HAS to be the same as obj.thing ? ChrisA

On 2021-06-23 5:21 a.m., Steven D'Aprano wrote:
We're saying including local extension methods into the proxy object's attribute lookup is wrong.
That's funny, because we (Soni) have been arguing about and pushing for a specific implementation of them. We're not opposed to them; quite the opposite, we even have an implementation we'd like to see, although we don't really see much use for them ourselves.
Why shouldn't extension methods be different from monkey-patching? If they were to be the same, why call them something different?
Extension methods are functions and are scoped like other functions.
Yes, and where they do so, they have to explicitly re-opt-in to it. And that's a good thing, because it gives you more flexibility - you're not *forced* to shadow any object methods with your extension methods when using reflection, and this can be useful when you wanna e.g. use extension methods for your own benefit and still write a programming language interpreter that integrates with the host language using reflection. The "obvious" thing is that reflection only cares about the object(s), but not the context they're in. If you want to bring in the context, you need to bring it in yourself.

On Tue, Jun 22, 2021 at 10:40 AM Steven D'Aprano <steve@pearwood.info> wrote:
Hmm, that's not what I'd usually understand "encapsulation" to mean. That's what would normally be called "namespacing". "... encapsulation refers to the bundling of data with the methods that operate on that data, or the restricting of direct access to some of an object's components." https://en.wikipedia.org/wiki/Encapsulation_(computer_programming)
I don't think it's safer necessarily. With this proposal, we have the notion that obj.method() can mean two completely different things *at the same time* and *on the same object* depending on how you refactor the code.

    # file1.py
    from file2 import func
    # and apply some extension methods

    def spamify(obj):
        print(obj.method())
        print(func(obj))

    # file2.py
    def func(obj):
        return obj.method()

Is that really beneficial? All I'm seeing is myriad ways for things to get confusing - just like in the very worst examples of monkey-patching. And yes, I have some experience of monkey-patching in Python, including a situation where I couldn't just "import A; import B", I had to first import a helper for A, then import B, and finally import A, because there were conflicting monkey-patches. But here's the thing: extension methods (by this pattern) would not have solved it, because the entire *point* of the monkey-patch was to fix an incompatibility. So it HAD to apply to a completely different module. That's why, despite its problems, I still think that monkey-patching is the cleaner option. It prevents objects from becoming context-dependent.
And the Ruby community is starting to see the risks of monkey-patching. (There's a quiz floating around the internet - "Ruby or Rails?" - that brings into sharp relief the incredibly far-reaching effects of using Rails. It includes quite a few methods on builtin objects.) So I am absolutely fine with being conservative. We have import hooks and MacroPy. Does anyone use them in production? I certainly don't - not because I can't, but because I won't without a VERY good reason.
I'm not sure why attaching attributes to functions is frowned upon; I'd personally make very good use of this for static variables, if only I could dependably refer to "this_function". But risk-averse is definitely preferable to the alternative. It means that Python is a language that can be learned as a whole, rather than being fragmented into "the NumPy flavour of Python" and "the Flask flavour of Python" and so on, with their own changes to the fabric of the language. So far, I'm not seeing anything in extension methods to make me want to change that stance. ChrisA

On Tue, Jun 22, 2021 at 05:50:48PM +1000, Chris Angelico wrote:
Hmm, that's not what I'd usually understand "encapsulation" to mean. That's what would normally be called "namespacing".
Pfft, who you going to believe, me or some random folx on the internet editing Wikipedia? *wink* Okay, using the Wikipedia/OOP definition still applies. Presumably most extension methods are going to be regular instance methods or class methods, rather than staticmethods. So they will take a `self` (or `cls`) parameter, and presumably most such extension methods will actually act on that self parameter in some way. There is your "bundling of data with the methods that operate on that data", as required :-) The fact that the methods happen to be written in a separate file, and (in some sense) added to the class as extension methods, is neither here nor there. While we *could* write an extension method that totally ignored `self` and instead operated entirely on global variables, most people won't -- and besides, we can already do that with regular methods.

    class Weird:
        def method(self, arg):
            global data, more_data, unbundled_data, extra_data
            del self  # don't need it, don't want it
            do_stuff_with(data, more_data, unbundled_data, extra_data)

So the point is that extension methods are no less object-orientey than regular methods. They ought to behave just like regular methods with respect to encapsulation, namespacing, inheritance etc, modulo any minor and necessary differences. E.g. in C# extension methods can only extend a class, not override an existing method. [...]
Yes, we can write non-obvious code in any language, using all sorts of "confusing" techniques, especially when you do stuff dynamically.

    class K:
        def __getattr__(self, attrname):
            if attrname == 'method':
                if __name__ == '__main__':
                    raise AttributeError
                return something()

Regarding your example, you're only confused because you haven't taken on board the fact that extension methods aren't interpreter-global, just module-global. Because it's new and unfamiliar. But we can do exactly the same thing, right now, with functions instead of methods, and you will find it trivially easy to diagnose the fault:

    # file1.py
    from file2 import func
    # instead of "applying an extension method" from elsewhere,
    # import a function from elsewhere
    from extra_functions import g

    def spamify(obj):
        print(g(obj))     # works fine
        print(func(obj))  # fails

    # file2.py
    def func(obj):
        return g(obj)  # NameError

This example looks easy and not the least bit scary to you because you've been using Python for a while and it has become second nature to you. But you might remember back when you were a n00b, it probably confused you: why doesn't `g(obj)` work when you imported it? How weird and confusing! What do you mean, if I want to use g, I have to import it in each and every module where I want to use it? That's just dumb. Importing g once should make it available EVERYWHERE, right? Been there, done that. You learned about modules and namespaces, and why Python's design is *safer and better* than a single interpreter-global namespace, and now that doesn't confuse you one bit. And if you were using Kotlin, or C#, or Swift, or any one of a number of other languages with extension methods, you would likewise learn that extension methods work in a similar fashion. Why does obj.method raise AttributeError from file2? *Obviously* it's because you neglected to "apply the extension method", duh. That's as obvious as neglecting to import something and getting a NameError. Maybe even more obvious, if your IDE or linter knows about extension methods. And it's *safer and better* than monkey-patching. We have two people in this thread who know Kotlin and C#, at least one of them is a fan of the technique. Why don't we ask them how often this sort of error is a problem within the Kotlin and C# communities?
Sure. Nobody says that extension methods are a Silver Bullet that cures all programming ills. Some things will need a monkey-patch. Python is great because we have a rich toolbox of tools to choose from. To extend a class with more functionality at runtime, we can:

- monkey-patch the class;
- subclass it;
- single or multiple inheritance;
- or a virtual subclass;
- or use it as a mixin or a trait (with third-party library support);
- use delegation and composition;
- or any one of a number of Design Patterns;
- add methods onto the instance to override the methods on the class;
- swizzling (change the instance's class at runtime to change its behaviour);
- just write a function.

Have I missed anything? Probably. None of those techniques is a silver bullet, all of them have pros and cons. Not all of the techniques will work under all circumstances. We should use the simplest thing that will work, for whatever definition of "work" we need for that task. Extension methods are just another tool in the toolbox, good for some purposes, not so good for others.
It might be a necessary thing under rather unusual circumstances, but under the great bulk of circumstances, it is a bad thing. Chris, here you are defending monkey-patching, not just as a necessary evil under some circumstances, but as a "cleaner" option, and then in your very next sentence:
And the Ruby community is starting to see the risks of monkey-patching.
Indeed. -- Steve

On Tue, Jun 22, 2021 at 9:23 PM Steven D'Aprano <steve@pearwood.info> wrote:
Okay, that's fair. Granted. It's not ALL of encapsulation, but it is, to an extent, encapsulation. (It is also namespacing, and your justification of it was actually a justification of namespacing; but this is Python, and I think we all agree that namespacing is good!)
Fair point. However, I've worked with a good number of languages that have some notion of object methods, and generally, an object has or doesn't have a method based on what the object *is*, not on who's asking. It's going to make for some extremely confusing results. Is getattr() going to act as part of the builtins module or the module that's calling it? What about hasattr? What about an ABC's instance check, or anything else? How do other languages deal with this? How do they have a getattr-like function? Does it have to be a compiler construct?
Yes. That is exactly right. I am claiming that monkey-patching is, in many MANY cases, a cleaner option than extension methods. And then I am saying that monkey-patching is usually a bad thing. This is not incompatible, and it forms a strong view of my opinion of extension methods. ChrisA

On 2021-06-22 05:14, Chris Angelico wrote:
I agree, and this is the aspect of the proposal that most confuses me. I still can't understand concretely what is being proposed, though, so I'm not sure I even understand it. Can someone clarify? Suppose I have this:

    ### file1.py
    @extend(list)
    def len2(self):
        return len(self)**2

    ### file2.py
    # or whatever I do to say "I want to use extensions to list defined in file1"
    from file1 extend list

    def coolness(some_list):
        return some_list.len2() + 1

    my_list = [1, 2, 3]
    print("My list len2:", my_list.len2())
    print("My list coolness:", coolness(my_list))

    ### file3.py
    import file2

    other_list = [1, 2, 3, 4]
    print("Other list len2:", other_list.len2())
    print("other list coolness:", file2.coolness(other_list))
    print("My list len2 from outside:", file2.my_list.len2())
    print("My list coolness from outside:", file2.coolness(file2.my_list))

What exactly is supposed to happen here if I run file3? file2 declares use of file1's extensions; file3 does not. But file3 uses a function in file2 that makes use of such extensions. Who sees the extension?

The list object my_list in file2 is the same object accessed as file2.my_list in file3. Likewise coolness and file2.coolness. It is going to be super confusing if calling the same function object with the same list object argument gives different results depending on which file you're in. Likewise it's going to be confusing if the same list object sometimes has a .len2 method and sometimes doesn't. But if it doesn't work that way, then it would seem to mean either every module sees the extensions (even if they didn't opt in), or else my_list in file2 is not the same object as file2.my_list in file3. And that would be even worse. (In this example it may seem okay because you can ask why I would call len2 from file3 if I didn't want to use it. But what if the extension is an override of an existing method? Is that not allowed?)

In addition, if there is a difference between my_list and other_list, then that apparently means that the syntax for lists now does something different in the two files. This is maybe the most reasonable approach, since it's at least remotely reminiscent of a __future__ import, which changes syntactic behavior. But what exactly is the difference between the two objects here? Are both objects lists? If they are, then how can they have different methods? If they're not, then what are they?

Most __future__ imports don't work like this. Maybe the closest thing is the generator_stop one, but at least that places a flag on the code object to indicate the difference. Would "extended lists" have some kind of magic attribute indicating which extensions they're using? That may have been marginally acceptable in the case of PEP 479, which was essentially a bugfix, and set the attribute on code objects, which are an obscure internal data structure. But allowing this kind of thing for "user-facing" objects like lists would create a profusion of different list objects with different behavior depending on some combination of attributes indicating "what extends me" --- or, even worse, create different behavior without any such overt indication of which extensions are in use for a given object.

The idea that the file in which code is written would somehow determine this type of runtime behavior seems to me to break my assumption that by knowing an object's identity I should have all the information I need to know about how to use it.
Some of the posts earlier in this thread seem to suggest that somehow the module where something was defined (something --- not sure what --- maybe the object with the extended method? maybe the extended method itself?) would somehow get a hook to override attribute access on some objects (again, not sure which objects). That to me is the exact opposite of encapsulation. Encapsulation means the object itself contains all its behavior. If there is some getattr-like hook in some other module somewhere that is lying in wait to override attribute access on a given object "only sometimes" then that's not encapsulation at all. It's almost as bad as the infamous COME FROM statement! Existing mechanisms like __getattribute__ are not parallel at all. When you know an object's identity, you know its MRO, which tells you all you need to know about what __getattribute__ calls might happen. You don't need to know anything about where the object "came from" or what file you're using it in. But it seems with this proposal you would need to know, and that's kind of creepy to me. -- Brendan Barnwell "Do not follow where the path may lead. Go, instead, where there is no path, and leave a trail." --author unknown

On 2021-06-22 3:43 p.m., Brendan Barnwell wrote:
NameError, value, NameError, value, respectively.
It isn't the list object that has the extension method.
Think about it like this: extension methods give you the ability to make imported functions that look like this:

    foo(bar, baz)

look like this instead:

    bar.foo(baz)

That's all there is to them. They're just a lie to change how you read/write the code. Some languages have a whole operator that has a similar function, where something like bar->foo(baz) is sugar for foo(bar, baz). The OP doesn't specify any particular mechanism for extension methods, so e.g. making the dot operator be implemented by a local function in the module, which delegates to the current attribute lookup mechanism by default, would be perfectly acceptable. It's like deprecating the existing dot operator and introducing a completely different one that has nothing to do with attribute lookup!
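(You can get an explicit, spelled-out version of that sugar in Python today; a toy sketch, with all names invented:)

    class bind:
        """bind(ns, obj).foo(x) calls ns.foo(obj, x) -- explicit sugar."""
        def __init__(self, namespace, obj):
            self.namespace = namespace
            self.obj = obj

        def __getattr__(self, name):
            func = getattr(self.namespace, name)
            return lambda *args, **kwargs: func(self.obj, *args, **kwargs)

    # e.g., given a module `exts` containing plain functions:
    #     bind(exts, bar).foo(baz)   is   exts.foo(bar, baz)

The proposal would, in effect, make the dot do this implicitly and per-module.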

On 2021-06-22 5:23 p.m., Chris Angelico wrote:
Sure! As long as the new one can call getattr! Let's say the new dot operator looks like this:

    # file1.py
    def __dot__(left, right):
        print(left)
        print(right)
        ...

    foo = []
    foo.bar

Now, this would actually print the list [] and the string "bar". Then you can just use getattr to get attribute lookup behaviour out of it!

    def __dot__(left, right):
        return getattr(left, right)

    foo = []
    foo.bar

It would have local scope, similar to uh... locals. Y'know how locals are just sugar for locals()['foo'] and stuff? Yeah.

On Wed, Jun 23, 2021 at 6:41 AM Soni L. <fakedme+py@gmail.com> wrote:
Not really, no, they're not. :) The dictionary returned by locals() isn't actually an implementation detail of local name lookups. Have you put any thought into how you would deal with the problem of recursive __dot__ calls? ChrisA

On 2021-06-22 5:54 p.m., Chris Angelico wrote:
It's... part of the language. Not an implementation detail. The dictionary returned by locals() is an inherent part of local name lookups, isn't it?
Have you put any thought into how you would deal with the problem of recursive __dot__ calls?
Let it recurse! Globals and locals don't go through __dot__, so you can just... use them. In particular, you can always use getattr(), and probably should. Or even set __dot__ to getattr inside it, like so:

    def __dot__(left, right):
        __dot__ = getattr
        foo.bar  # same as getattr(foo, "bar") because we set (local) __dot__ to getattr above

In languages with lexical scoping (instead of block scoping), the compiler doesn't see things that haven't yet been declared. In those languages, such a __dot__ function would actually inherit the global __dot__ rather than recursing. But as you can see from the above example, it's really not a big deal.

On Wed, Jun 23, 2021 at 8:30 AM Soni L. <fakedme+py@gmail.com> wrote:
No, it's not. Most definitely not. https://docs.python.org/3/library/functions.html#locals
I can't actually pin down what I'm averse to here, but it gives me a really REALLY bad feeling. You're expecting every attribute lookup to now look for a local or global name __dot__ (or, presumably, a nonlocal, class, or builtin), and do whatever that does. That seems like a really effective foot-gun. Have you actually tried designing this into a larger project to see what problems you run into, or is this something you've only considered at this trivial level? ChrisA

On 2021-06-22 7:38 p.m., Chris Angelico wrote:
Ohh. Fair enough, sorry.
1. It's opt-in.

2. It's designed to be used by a hypothetical extension methods module, but without imposing any design constraints on such a module. It could return a named function every time a given name is looked up (a la "bind the first argument" operator), or do dynamic dispatch based on types or ABCs (a la proper extension methods).

In practice, you don't def your own __dot__, but rather use someone else's "__dot__ builder". If you don't wanna deal with it, just don't use __dot__. It's also useful for the occasional domain-specific language.
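To sketch what such a "__dot__ builder" might look like, assuming the per-module __dot__ hook described above existed (make_dot and the table layout are invented for illustration):

    def make_dot(extensions):
        # extensions: {type: {method_name: function}}
        def __dot__(left, right):
            for cls in type(left).__mro__:
                methods = extensions.get(cls, {})
                if right in methods:
                    return methods[right].__get__(left, cls)
            return getattr(left, right)  # fall back to normal lookup
        return __dot__

    def flatten(self):
        return [x for sub in self for x in sub]

    __dot__ = make_dot({list: {"flatten": flatten}})

A module that assigned this __dot__ would see [[1], [2]].flatten() succeed, while every other module's dot would be unaffected.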

On 2021-06-22 13:09, Soni L. wrote:
Okay, if that's the case, then I just think it's a bad idea. :-) We already have a definition for what bar.foo does, and it's totally under the control of the bar object (via the __getattr__/__getattribute__ mechanism). The idea that other things would be able to hook in there does not appeal to me at all. I don't really understand why you would want such a thing, to be honest. I feel it would make code way more difficult to reason about, as it would break locality constraints every which way. Now every time you see `bar.foo` you would have to think about all kinds of other modules that may be hooking in and adding their own complications. What's the point? Mostly the whole benefit of the dot notation is that it specifies a locally constrained relationship between the object and the attribute: you know that bar.foo does what bar decides, and no one else gets any say (unless bar asks for their opinion, e.g. by consulting global variables or whatever). If we want to write foo(bar, baz)... well, we can just do that! What you're describing would just make existing attribute usages harder to understand while only "adding" something we can already do quite straightforwardly.

Imagine a similar proposal for other syntax. Suppose that in any module I could define a function called operator_add and then other modules could "import" this "extension" so that every use of the + operator would somehow hook into this operator_add function. So now every time you do 2 + 2 you might be invoking some extension behavior. In my view that is unambiguously a road to madness, and as far as I can tell the extension mechanism you're proposing is equally ill-advised.

-- Brendan Barnwell "Do not follow where the path may lead. Go, instead, where there is no path, and leave a trail." --author unknown

On 2021-06-22 5:34 p.m., Brendan Barnwell wrote:
Imagine if Python didn't have a + operator, but instead a + *infix function*. Thus, every module would automatically include the global

    def infix +(left, right): ...

And indeed, you could say we already have this. Except currently you can't define your own local infix +. But what if you *could*? What if you could just,

    # file1.py
    def infix +(left, right):
        return left << right
    x = 4 + 4

    # file2.py
    def infix +(left, right):
        return left ** right
    x = 4 + 4

    # file3.py
    import file1
    import file2
    print(file1.x)  # 64
    print(file2.x)  # 256
    print(4 + 4)    # 8

How does this break locality? Same idea with the dot operator, really. (Some languages don't have operators, but only functions. They let you do just this.)

On 2021-06-22 15:35, Soni L. wrote:
Then that would be bad. All the examples you give seem bad to me. They just make the code more confusing.

Python combines various paradigms, but I think one way in which it very smoothly leverages object orientation is by making objects the locus of so much behavior. Operator overloads are defined at the object level, as are "quasi-operator" overloads for things like attribute lookup, iteration, context managers, etc. This means that the semantics of an expression are, by and large, determined by the types of the objects in that expression. What you're describing is basically moving a lot of that out to the module level. Now instead of operator overloads being governed by objects, they'd be governed by the module in which the code appears. The semantics of an expression would be determined not (only) by the types of the objects involved, but also by the module in which the expression textually occurs.

There aren't many things in Python that work this way. Future imports are the main one, but those are rare (and rightly so). The import machinery itself provides some possibility for this (as used by stuff like macropy) but is mostly not used for such things (and again rightly so). Beyond that, I just think this kind of thing is a bad idea. Objects naturally cross module boundaries, in that an object may be created in one module and used in many other modules. It is good for an object's behavior to be consistent across modules, so that someone using an object (or reading code that uses an object) can look at the documentation for that object's type and understand how it will work in any context. It is good for code to be understandable in a "bottom up" way in which you understand the parts (the objects, expressions, syntactic structures, etc.) and can combine your understanding of those parts to understand the whole.

It is bad for an object to shapeshift and do different things in different contexts. It is bad for code to heavily depend on "top down" information that requires you to know "where you are" (in one module or another) to understand how things work. That increases cognitive burden and makes code more difficult to understand. Personally I'm opposed to anything that moves in that direction.

-- Brendan Barnwell "Do not follow where the path may lead. Go, instead, where there is no path, and leave a trail." --author unknown

On Wed, Jun 23, 2021 at 6:49 PM Brendan Barnwell <brenbarn@brenbarn.net> wrote:
Future imports are generally about syntax. They only very seldom affect execution, and when they do, there has to be a mechanic like a flag on the code object (as generator_stop does). In order to maintain compatibility between modules with and without the directive, the execution has to be independent of that. (For example, Py2's "from __future__ import division" changes the bytecode created from the division operator, and barry_as_FLUFL generates the same code for "<>" that normal mode generates for "!=".) There's no fundamental problem with having things defined per-module. You can override the print function for the current module, and other modules aren't affected. I think that per-module operator overloading would be extremely confusing, but mainly because type-free operator overloading is an inherently confusing thing to do. Python allows an object to override operators, and C++ allows you to write a function called "operator+" that takes two arguments, but the types of those arguments are what make everything make sense. That usually removes the problem of recursion, because the implementation of operator+(my_obj, my_obj) is usually going to require addition of simpler types like integers. ChrisA

On Tue, Jun 22, 2021 at 11:43:09AM -0700, Brendan Barnwell wrote:
Right, as far as code executing within the "file2" scope is concerned, lists have a "len2" method.
The call to other_list.len2 would fail with AttributeError, because the current execution scope when the attribute is looked up is file3, not file2. The call to file2.coolness() would succeed, because the function coolness is executed in the scope of file2, which is using the extension method. This is similar to the way name lookups work:

    # A.py
    x = 1
    def func():
        print(x)  # This always looks up x in A's scope.

Obviously calling func() in its own module will succeed. But if you call func() from another module, it doesn't look up "x" in the caller's module, it still refers back to the module where it was defined, namely A.

    # B.py
    x = 9999
    from A import func
    func()  # prints 1, not 9999

This is standard Python semantics, and you probably don't even think about it. Extension methods would work similarly: whether the attribute name is visible or not depends on which scope the attribute lookup occurs in, not who is calling it. So if you can understand Python scoping, extension methods will be quite similar. Remember:

- global `x` in A.py and global `x` in B.py are different names in different scopes;
- functions remember their original global scope and look up names in that, not the caller's global scope.

Attribute lookups involving extension methods would be similar:

- whether the extension method is seen or not depends on which scope the lookup occurs in, not where the caller is.

That lets us see what will happen in file3.py here:
print("My list len2 from outside:", file2.my_list.len2())
It doesn't matter where the list comes from. What matters is where the attribute access occurs. In the first line there, the attribute access occurs in the scope of file3, so it fails. It doesn't matter whether you write `other_list.len2` or `file2.my_list.len2`, both are equivalent to:

- get a list instance (file2.my_list, but it could be anything -- it doesn't matter where the instance comes from);
- look up the method "len2" on that instance **in the file3 scope**.

Hence it fails. The line that follows:
print("My list coolness from outside:", file2.coolness(file2.my_list))
does this:

- get a list instance (file2.my_list);
- pass it to file2.coolness;
- which looks up the method "len2" **in the file2 scope**

and hence it succeeds.
"It is going to be super confusing if calling the same function with the same *name* gives different results depending on which file you're in." -- me, 25 years ago or so, when I first started learning Python And probably you to. Attribute lookups are just another form of name lookup. Name lookups depend on the current execution scope, not the caller's scope. With extension methods, so do attribute lookups. If you can cope with name lookups, you can cope with extension methods. [...]
But what if the extension is an override of an existing method? Is that not allowed?)
In C#, you can define extension methods with the same name as an existing method, but they will never be seen. The extension methods are only used if the normal method lookup fails.
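(To illustrate with the hypothetical @extends decorator from earlier in the thread: an extension that collides with a real method would simply never win.)

    @extends(list)
    def count(self, x):   # list.count already exists...
        return -1

    [1, 1].count(1)       # ...so this still returns 2, not -1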
In addition, if there is a difference between my_list and other_list,
No, absolutely not. It isn't two different sorts of list. Here is some pseudo-code that might help clarify the behaviour:

    _getattr = getattr  # original that we know and love

    def getattr(obj, name):  # called on obj.name
        # New, improved version
        try:
            return _getattr(obj, name)
        except AttributeError:
            if current execution scope is using extensions for type(obj):
                return extension_method(type(obj), name)
            raise

Think of that as a hand-wavy sketch of behaviour, not an exact specification.
Are both objects lists? If they are, then how can they have different methods?
Here's another sketch of behaviour:

    class object:
        def __getattr__(self, name):
            # only called if the normal obj.name fails
            if current execution scope is using extensions for type(self):
                return extension_method(type(self), name)

There's nothing in this proposal that isn't already possible. In fact, I reckon that some of the big frameworks like Django and others probably already do stuff like this behind the scenes. The `extension_method(type, name)` lookup could be nothing more than a dictionary keyed with types:

    {list: {'flatten': <function at 0x123456abcd>, ...},
     int: { ... },
    }

I'm not really sure how to implement this test:

    if current execution scope is using extensions for type(self)

I have some ideas, but I'm not sure how viable or fast they would be.
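For what it's worth, one rough way to implement that test today would be frame inspection (CPython-only, slow, and all names here are invented for illustration):

    import sys

    _uses_extensions = set()   # __name__ of each module that opted in

    def use_extensions():
        _uses_extensions.add(sys._getframe(1).f_globals.get('__name__'))

    def scope_is_using_extensions():
        # frame 0 = here, 1 = __getattr__, 2 = the attribute access site
        caller = sys._getframe(2).f_globals.get('__name__')
        return caller in _uses_extensions

Whether something like that could ever be made fast enough is exactly the open question.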
And yet the legions of people using C#, Java, Swift, TypeScript, Kotlin, and others find it invaluable. One of the most popular libraries in computing, LINQ, works through extension methods. COME FROM was a joke command in a joke language, Intercal. Nobody really used it except as a joke. For you to compare a powerful and much-loved feature used by millions of programmers to COME FROM is a perfect demonstration of the Blub factor in action. It may be that there are technical reasons why extension methods are not viable in Python, but "it's almost as bad as COME FROM" is just silly. Have a bit of respect for the people who designed extension methods, implemented them in other languages, and use them extensively. They're not all idiots. And for that matter, neither are Python programmers. Do we really believe that Python programmers are too dim-witted to understand the concept of conditional attribute access? "Descriptors, async, multiple inheritance, comprehensions, circular imports, import hooks, context managers, namespaces, threads, multiprocessing, generators, iterators, virtual subclasses, metaclasses, I can understand all of those, but *conditional attribute access* makes my brain explode!!!" I don't believe it for a second.
Existing mechanisms like __getattribute__ are not parallel at all.
I just demonstrated that it would be plausible to implement extension methods via `__getattr__`. Perhaps not efficiently enough to be useful, perhaps not quite with the semantics desired, but it could be done. -- Steve

On 2021-06-23 03:02, Steven D'Aprano wrote:
But that's the thing, they aren't. You gave a bunch of examples of lexical scope semantics with imports and function locals vs globals. But attribute lookups do not work that way. Attribute lookups are defined to work via a FUNCTION CALL to the __getattribute__ (and thence often the __getattr__) of the OBJECT whose attribute is being looked up. They do not in any way depend on the name via which that object is accessed.

Now of course you can say that you want to make a new rule that throws the old rules out the window. We can do that for anything. We can define a new rule that says now when you do attribute lookups it will call a global function called attribute_lookups_on_tuesdays if it's a Tuesday in your timezone. But what I'm saying is that the way attribute lookups currently work is not the same as the way bare-name lookups work, because attribute lookups are localized to the object (not the name!) and bare-name lookups are not. I consider this difference fundamental to Python. It's why locals() isn't really how local name lookups work (which came up elsewhere in this thread). It's why you can't magically hook into "x = my_obj" and create some magical behavior that depends on my_obj. Attribute lookups are under the control of the object; they come after the scope-based name resolution is all over with and they don't use the scope-based rules.

As for other languages, you keep referencing them as if the fact that something known as "extension methods" exists in those other languages makes it self-evident that it would be useful in Python. Python isn't those other languages. I'm not familiar with all of the other languages you mentioned, but I'll bet that at least some of them do not have the same name/attribute lookup rules and dunder-governed object-customization setup as Python. So that's the difference. The fact that extension methods happen to exist and be useful in those languages is really neither here nor there. The attribute lookup procedure you are proposing is deeply inconsistent with the way Python currently does attribute lookup and currently does other things (like operator overloading), and doesn't fit into Python's overall mechanism of object-based hooks. A spatula attachment may be useful on a kitchen mixer; that doesn't mean it's a good idea to add one to your car's dashboard.

Apart from that, I will say that I also don't generally assume that because other languages have a feature it's good or worth considering. Some languages are better designed than others. I think Python is a very well designed language. Certainly we can learn from other languages, but even apart from the issues of "fit" that I describe above, the mere fact that some feature is available or even considered useful in other languages doesn't by itself even convince me it's a good idea at all. It could just be a mistake. We need to specifically show that this will make writing and/or reading code easier and better in Python, and I think this proposal would do the opposite, making code harder to read.

-- Brendan Barnwell "Do not follow where the path may lead. Go, instead, where there is no path, and leave a trail." --author unknown

On Wed, Jun 23, 2021 at 11:22:26AM -0700, Brendan Barnwell wrote:
Of course attribute lookups are another form of name lookup. One hint that this is the case is that some languages, such as Java, call attributes *variables*, just like local and global variables. Both name and attribute lookups are looking up some named variable in some namespace. This shouldn't be controversial. When you look up a bare name:

    x

the interpreter follows the LEGB rule and looks for `x` in the local scope, the enclosing scope, the global scope and then the builtin scope. There are some complications to do with locals and class scopes, and comprehensions, etc., but fundamentally you are searching namespaces for names. Think of the LEGB rule as analogous to an MRO. When you look up a dotted name:

    obj.x

the interpreter looks for `x` in the instance scope, the class scope, and any superclass scopes. The detailed search rules are different, what with descriptors, inheritance, etc., the MRO could be indefinitely long, and there are dynamic attributes (`__getattr__`) too. But fundamentally, although the details are different, attribute access and name lookup are both looking up names in namespaces.
There's that weird obsession with "function call" again. Why do you think that makes a difference?
They do not in any way depend on the name via which that object is accessed.
Ummmm... yes? Why is that relevant?
Now of course you can say that you want to make a new rule that throws the old rules out the window.
That's pure FUD. Extension methods don't require throwing the old rules out the window. The old rules continue to apply.
If you want to do that then go ahead and propose it in another thread, but I don't want anything like that strawman.
You might have heard of something called "inheritance". And dynamic attributes. Maybe even mixins or traits. Attribute lookups are not localised to the object.
(not the name!) and bare-name lookups are not. I consider this difference fundamental to Python.
There are plenty of differences between name lookups and attribute lookups, but locality is not one of them.
It's why locals() isn't really how local name lookups work (which came up elsewhere in this thread).
I don't see the relevance to extension methods.
I don't see the relevance to extension methods. You seem to be just listing random facts as if they were objections to the extension method proposal. You're right, we can't hook into assignment to a bare name. So what? We *can* hook into attribute lookups.
I think that's a difference that makes no difference. What if I told you that it is likely that Python's name/attribute lookup rules and dunder-governed object-customization are the key features that would make extension methods possible with little or no interpreter support? As I described in a previous post, adding extension methods would be a very small change to the existing attribute lookup rules. The tricky part is to come up with a fast, efficient registration system for determining when to use them.
The fact that extension methods happen to exist and be useful in those languages is really neither here nor there.
Extension methods don't just "happen to exist", they were designed to solve a real problem. That's a problem that can apply to Python just as much as other languages. Your argument here sounds like Not Invented Here. "Sure they're useful, in *other* (lesser) languages, not in Python! We never have any need to extend classes with new methods -- apart from all the times we do, but they don't count."
"Deeply inconsistent" in what way? `__getattr__` can already do **literally anything** on attribute lookups? Anything that you can do in Python, you can do in a `__getattr__` method. Including adding arbitrary new methods.
Fortunately this is not a proposal to add spatulas to car dashboards. It is a proposal to add a mechanism to extend classes with extra functionality -- an extremely common activity in OOP. [...]
I think this proposal would do the opposite, making code harder to read.
Okay. Here's a method call with a regular method:

    obj.method(arg)  # easy to read, understandable

Here's a method call with an extension method:

    obj.method(arg)  # unreadable, incomprehensible garbage

Yes, you're absolutely right, the second is *so much harder to read*. Seriously, there's a time to realise when arguments against a feature devolve down to utterly spurious claims that Python programmers are idiots who will be confused by:

    from extensions use flatten
    mylist.flatten()

but can instantly understand:

    from extensions import flatten
    flatten(mylist)

If you can understand importing functions, you can understand using extension methods. If anything, extension methods are simpler: there's not likely to be all the complications of:

- circular imports, which are tricky even for seasoned Pythonistas;
- differences between standard import and import from, which often trip beginners up;
- absolute and relative imports;
- regular and namespace packages;

and other quirks of importing. Compared to all of those, extension methods are likely to be simple and straightforward.

-- Steve

On 2021-06-24 20:59:31, Steven D'Aprano wrote:
Does this mean importing a module can modify other objects, including builtins? Should this spooky-action-at-a-distance be encouraged? OTOH, this already happens in the stdlib with rlcompleter, I assume using monkey-patching. This is a special case for interactive use, though. https://docs.python.org/3/library/rlcompleter.html

On 6/24/21 7:09 AM, Simão Afonso wrote:
Yes, importing a module runs the global code in that module, and that code can not only define the various things in that module but can also manipulate the contents of other modules. This doesn't mean that spooky-action-at-a-distance is always good, but sometimes it is what is needed. You need to be aware of the power that you wield. -- Richard Damon

Steven, you're making a pretty good case here, but a couple questions:

1) The case of newer versions of Python adding methods to builtins, like .bit_length, is really compelling. But I do wonder how frequently that comes up. It seems to me on this list that people are very reluctant to add methods to anything builtin (other than dunders) -- particularly ABCs, or classes that might be subclassed, as that might break anything that currently uses that same name. Anyway, is this a one-off? Or something that is likely to come up semi-frequently in the future? Note also that the py2 to py3 transition was (hopefully) an anomaly -- more subtle changes between versions make it less compelling to support old versions for very long.

2) Someone asked a question about the term "Extension Methods" -- I assume it's "Extension attributes", where the new attribute could be anything, yes?

3) Comprehensibility:

Seriously, there's a time to realise when arguments against a feature
No -- we're not assuming Python users are idiots -- there is an important difference here:

    from extensions import flatten
    flatten(mylist)

very clearly adds the name `flatten` to the current module namespace. That itself can be confusing to total newbies, but yes, you can't get anywhere with Python without knowing that. Granted, you still need to know what `flatten` is, and what it does, some other way in any case. Whereas:

    from extensions use flatten
    mylist.flatten()

does NOT import the name `flatten` into the local namespace -- which I suppose will be "clear" because it's using "use" rather than a regular import, but that might be subtle. But importantly, what it has done is add a name to some particular type -- what type? Who knows? In this example, you used the name "mylist", so I can assume it's an extension to list. But if that variable were called "stuff", I'd have absolutely no idea. And as above, you'd need to go find the documentation for the flatten extension method, just as you would for any name in a module, but somehow functions feel more obvious to me.

Thinking about this I've found what I think is a key issue for why this may be far less useful for Python than it is for other languages. Using the "flatten" example, which I imagine you did due to the recent discussion on this list about such a function as a potential new builtin: Python is dynamically Polymorphic (I may have just made that term up -- I guess it's the same as duck typed) -- but what that means in this context is that I don't usually care exactly what type an object is -- only that it supports particular functionality, so, for instance:

    from my_utilities import flatten

    def func_that_works_with_nested_objects(the_things):
        all_the_things_in_one = flatten(the_things)
        ...

Presumably, that could work with any iterable with iterables in it. But:

    from my_extensions use flatten

    def func_that_works_with_nested_objects(the_things):
        all_the_things_in_one = the_things.flatten()

OOPS! That's only going to work with actual lists. Are you thinking that you could extend an ABC? Or if not that, then at least a superclass and get all subclasses? I'm a bit confused about how the MRO might work there. Anyway, in my mind THAT is the big difference between Python and at least many of the languages that support extension methods.

A "solution" would be to do what we do with numpy -- it has an "asarray()" function that is a no-op if the argument is a numpy array, and creates an array if it's not. We often put that at the top of a function, so that we can then use all the nifty array stuff inside the function, but not require the caller to create an array first. But that buys ALL the numpy functionality; it would be serious overkill for a method or two. It's not a reason it couldn't work, or be useful, but certainly a lot less useful than it might be. In fact, the example for int.bit_length may be the only compelling use case -- not that method per se, but a built-in type that is rarely duck-typed. That would be integers, floats and strings, at least those are the most common, and even ints and floats are two types that are frequently used interchangeably.

Side note: I don't see the relevance to extension methods. You seem to be just
listing random facts as if they were objections to the extension method proposal.
Let's keep this civil, and assume good intentions -- if something is irrelevant, it's irrelevant, but please don't assume that the argument was not made in good faith. For my part, I've been following this thread, but only recently understood the scope of the proposal well enough to know that e.g. the above issue was not relevant.

-Chris B

--
Christopher Barker, PhD (Chris)
Python Language Consulting
- Teaching
- Scientific Software Development
- Desktop GUI and Web Development
- wxPython, numpy, scipy, Cython

On 2021-06-24 09:19:36, Christopher Barker wrote:
This explicit namespacing is an important objection, IMHO. What about this syntax:
from extensions use flatten in list
And symmetrically:
from extensions use * in list
Going even further (maybe too far):
from extensions use * in *

On Thu, Jun 24, 2021 at 09:19:36AM -0700, Christopher Barker wrote:
Of course there is no way of knowing how conservative future Steering Councils will be. As Python gets older, all the obviously useful methods will already exist, and the rate of new methods being added will likely decrease. But if you look at the docs for builtins: https://docs.python.org/3/library/stdtypes.html and search for "New in version" you will get an idea of how often builtins gain new methods since Python 3. Similarly for other modules.
Large frameworks and libraries will surely continue to support a large range of versions, even as they drop support for 2.7.
2) Someone asked a question about the term "Extension Methods" -- I assume it's "Extension attributes", where the new attribute could be anything, yes?
In principle, sure. That's a design question to be agreed upon. [...]
Indeed. There's nothing "very clearly" about that until you have learned how imports work and why `from... import...` is different from `import`.
Correct -- it "imports" it into the relevant class namespace, or whatever language people will use to describe it.
That's a fair observation. Do you think that C# users of LINQ are confused by what is being modified when they say this?

    using System.Linq;

Of course extension methods will be *new and different* until they become old and familiar. It may be that the syntax will make it clear not just where you are "importing" extensions from, but what types will be included. That is an excellent point to raise, thank you.
Yes, and we can write obfuscated variable names in Python today too :-)
You're not wrong. I dare say that there will be a learning curve involved with extension methods, like any new technology. If you've never learned about them, it might be confusing to see:

    mylist.flatten()

in code, and then try `help(list.flatten)` in the interactive interpreter and get an AttributeError exception, because you didn't notice the "use" at the top of the module. But how is that different from seeing:

    now = time()

in code, and then having `help(time)` raise a NameError because you didn't notice the import at the top of the module? There's a learning curve in learning to use any tool, and that includes learning to program. [...]
If ABCs use normal attribute lookup, there's no reason why extension methods shouldn't work with them. -- Steve

On 2021-06-21 12:49 p.m., Chris Angelico wrote:
Quite the opposite. You ask the local module (the one that the code was compiled in), and the module decides whether/when to ask the object itself. In other words, every foo.bar would be sugar for __getattr__(foo, "bar") (where __getattr__ defaults to builtins.getattr) instead of being sugar for <builtins.getattr>(foo, "bar") (where <> is used to indicate that it doesn't quite desugar that way - otherwise you'd need to recursively desugar it to builtins.getattr(builtins, "getattr") which uh, doesn't work.)
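For concreteness, a hand-desugared sketch of that idea (the hook itself is hypothetical, so the rewrite is shown manually; note too that a module-level `__getattr__` already means something else in real Python, per PEP 562):

    import builtins

    # Hypothetical module-local hook: under the proposal, every `foo.bar`
    # compiled in this module would become __getattr__(foo, "bar").
    def __getattr__(obj, name):
        if isinstance(obj, list) and name == "flatten":
            # a list extension visible only to this module's code
            return lambda: [x for sub in obj for x in sub]
        return builtins.getattr(obj, name)   # the default behaviour

    nested = [[1, 2], [3]]
    # Under the proposal, `nested.flatten()` would desugar to:
    print(__getattr__(nested, "flatten")())   # [1, 2, 3]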

On Tue, Jun 22, 2021 at 3:55 AM Soni L. <fakedme+py@gmail.com> wrote:
Thanks for clarifying. This doesn't change the problem though - it just changes where the issue shows up. (BTW, what you're describing is closer to __getattribute__ than it is to __getattr__, so if you're proposing this as the semantics, I strongly recommend going with that name.) So, here's the question - a clarification of what I asked vaguely up above. Suppose you have a bunch of these extension methods, and a large project. How are you going to register the right extension methods in the right modules within your project? You're binding the functionality to the module in which the code was compiled, which will make exec/eval basically unable to use them, and that means you'll need some way to set them in each module, or to import the setting from somewhere else. How do you propose doing this? ChrisA

On 2021-06-21 3:01 p.m., Chris Angelico wrote:
Oh, sorry, thought __getattribute__ was the fallback and __getattr__ the one always called, what with getattr -> __getattr__. But yeah, __getattribute__ then.
For exec/eval you just pass in the locals:

    exec(foo, globals(), locals())

because this __getattribute__ is just a local like any other. As for each module, you'd import them. But not quite with "import":

    import extension_methods  # magic module, probably provides an @extend(class_) e.g. @extend(list)
    import shallow_flatten
    import deep_flatten

    __getattribute__ = extension_methods.getattribute(
        shallow_flatten.flatten,            # uses __name__
        deepflatten=deep_flatten.flatten,   # name override
        __getattribute__=__getattribute__,  # optional, defaults to builtins.getattr
    )

This would have to be done for each .py that wants to use the extension methods.
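One plausible shape for that hypothetical `extension_methods.getattribute` factory (an illustrative sketch, not a real module):

    import builtins

    def getattribute(*funcs, **named):
        # Positional functions register under their __name__;
        # keyword arguments override the registered name.
        fallback = named.pop("__getattribute__", builtins.getattr)
        table = {f.__name__: f for f in funcs}
        table.update(named)
        def lookup(obj, name):
            try:
                return fallback(obj, name)   # real attributes win
            except AttributeError:
                if name in table:
                    return table[name].__get__(obj, type(obj))  # bind like a method
                raise
        return lookup

With that in place, `mylist.flatten()` in the importing module would conceptually desugar to `__getattribute__(mylist, "flatten")()`.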

On Mon, Jun 21, 2021 at 3:28 PM Soni L. <fakedme+py@gmail.com> wrote:
I bet you that you could already do this today with a custom import hook. If you want to "easily" experiment with this, I would suggest having a look at https://aroberge.github.io/ideas/docs/html/index.html which likely has all the basic scaffolding that you would need. André Roberge

On Mon, Jun 21, 2021 at 02:54:52PM -0300, Soni L. wrote:
All you've done here is push the problem further along -- how does `__getattr__` (`__getattribute__`?) decide what to do?

* Why is this extension-aware version per module, instead of a builtin?

* Does that mean the caller has to write it in every module they want to make use of extensions?

* Why do we need a second attribute lookup mechanism instead of having the existing mechanism do the work?

* And most problematic, if we have an extension method on a type, the builtin getattr ought to pick it up.

By the way, per-module `__getattr__` already has a meaning, so this name won't fly:

https://www.python.org/dev/peps/pep-0562/

-- Steve

On 2021-06-21 8:42 p.m., Steven D'Aprano wrote:
No, you got it wrong. Extension methods don't go *on* the type being extended. Indeed, that's how they differ from monkeypatching. The whole point of extension methods *is* to be per-module. You could shove it in the existing attribute lookup mechanism (aka the builtins.getattr) but that involves runtime reflection, whereas making a new, per-module attribute lookup mechanism specifically designed to support a per-module feature would be a lot better. Extension methods *do not go on the type*. And sure, let's call it __opcode_load_attr_impl__ instead. Sounds good?

On 2021-06-21 8:57 p.m., Thomas Grainger wrote:
It seems odd that it would be per module and not per scope?
It's unusual to import things at the scope level. Usually things get imported at the module level, so using module language doesn't seem that bad. But yes, it's per scope -- though in practice it's per module, because nobody would actually use this per scope even though they could. :p

I've just thought of a great use-case for extension methods. Hands up who has to write code that runs under multiple versions of Python? *raises my hand* I'm sure I'm not the only one. You probably have written compatibility functions like this:

    def bit_length(num):
        try:
            return num.bit_length()
        except AttributeError:
            # fallback implementation goes here
            ...

and then everywhere you want to write `n.bit_length()`, you write `bit_length(n)` instead. Extension methods would let us do this:

    # compatibility.py
    @extends(int)
    def bit_length(self):
        # fallback implementation goes here
        ...

    # mylibrary.py
    using compatibility

    num = 42
    num.bit_length()

Now obviously that isn't going to help with versions too old to support extension methods, but eventually extension methods will be available in the oldest version of Python you care about:

    # supports Python 3.14 and above

Once we reach that point, then backporting new methods to classes becomes a simple matter of using an extension method. No mess, no fuss.

As someone who has written a lot of code like that first bit_length compatibility function in my time, I think I've just gone from "Yeah, extension methods seem useful..." to "OMG I WANT THEM TEN YEARS AGO SO I CAN USE THEM RIGHT NOW!!!". Backporting might not be your killer-app for extension methods, but I really do think they might be mine.

-- Steve
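For concreteness, one plausible pure-Python body for that elided fallback (a sketch, not code from the thread):

    def bit_length(num):
        # Same semantics as int.bit_length() in 2.7+: the number of bits
        # needed to represent abs(num), excluding sign and leading zeros.
        if num == 0:
            return 0
        return len(bin(abs(num))) - 2   # strip the leading '0b'

    assert bit_length(0) == 0
    assert bit_length(5) == 3    # 0b101
    assert bit_length(-8) == 4   # 0b1000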

On Wed, Jun 23 2021 at 20:48:39 +1000, Steven D'Aprano <steve@pearwood.info> wrote:
Of course that means the standard library might also introduce something new that will be shadowed by one of your custom methods, and then you'll wish you had just used functions or a wrapper class. If you can import extension methods wholesale, you might even be monkeypatching something without realising it, in which case you'll be lucky if things break in an obvious way. Honestly, all the use cases in this thread seem to be much better served by using plain old functions.

On Wed, Jun 23, 2021 at 07:23:19PM +0200, João Santos wrote:
Of course that means the standard library might also introduce something new that will be shadowed by one of your custom methods,
Extension methods have lower priority than actual methods on the class. So that won't happen. The actual method on the class will shadow the extension method.

I'm not sure if you completely understand the use-case I was describing, so let me clarify for you with a concrete example.

Ints have a "bit_length" method, starting from Python 2.7. I needed to use that method going all the way back to version 2.4. I have an implementation that works, so I could backport that method to 2.4 through 2.6, except that you can't monkey-patch builtins in Python. So monkey-patching is out.

(And besides, I wouldn't want to monkey-patch it: I only need that method in one module. I want to localise the change to only where it is needed.)

Subclassing int wouldn't help. I need it to work on actual ints, and any third-party subclasses of int, not just my own custom subclass.

(And besides, have you tried to subclass int? It's a real PITA. It's easy enough to write a subclass, but every operation on it returns an actual int instead of the subclass. So you have to write a ton of boilerplate to make int subclasses workable. But I digress.)

So a subclass is not a good solution either. That leaves only a function. But that hurts code readability and maintenance. In 2.7 and above, bit_length is a method, not a function. All the documentation for bit_length assumes it is a method. Every tutorial that uses it has it as a method. Other code that uses it treats it as a method. Except my code, where it is a function.

Using a function is not a *terrible* solution to the problem of backporting a new feature to older versions of Python. I've done it dozens of times and it's not awful. **But it could be better.**

Why can't the backport be a method, just like in 2.7 and above? With extension methods, it can be. Obviously not for Python 2.x code. But plan for the future: if we have extension methods in the language, eventually every version of Python we care about will support it. And then writing compatibility layers will be much simpler.
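The int-subclass pain mentioned above is easy to demonstrate with a hypothetical subclass:

    class MyInt(int):
        def parity(self):
            return self % 2

    x = MyInt(3) + MyInt(4)
    print(type(x))   # <class 'int'> -- arithmetic silently drops the subclass
    # x.parity()     # AttributeError: 'int' object has no attribute 'parity'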
and then you'll wish you had just used functions or a wrapper class.
Believe me, I won't. I've written dozens of compatibility functions over the last decade or more, going back to Python 2.3. I've written hybrid 2/3 code. Extension methods would not always be useful, but for cases like int.bit_length, it would be a far superior solution.
If you can import extension methods wholesale, you might even be monkeypatching something without realising it
Extension methods are not monkey-patching. They are like adding a global name to one module. If I write:

    def func(arg):
        ...

in module A.py, that does not introduce func to any other module unless those other modules explicitly import it. Extension methods are exactly analogous: they are only visible in the module where you opt in to use them. They don't monkey-patch the entire interpreter-wide environment.

And because extension methods have a lower priority than actual methods, you cannot override an existing method on a class. You can only extend the class with a new method.

-- Steve

On Thu, Jun 24, 2021 at 7:51 PM Steven D'Aprano <steve@pearwood.info> wrote:
You've given some great arguments for why (5).bit_length() should be allowed to be a thing. (By the way -- we keep saying "extension METHODS", but should this be allowed to give non-function attributes too?)

But not once have you said where getattr(), hasattr(), etc come into this. The biggest pushback against this proposal has been the assumption that getattr(5, "bit_length")() would have to be the same as (5).bit_length(). Why is that necessary? I've never seen any examples of use-cases for that.

Let's tighten this up into a real proposal. (I'm only +0.5 on this, but am willing to be swayed.)

* Each module has a registration of (type, name, function) triples.

* Each code object is associated with a module.

* Compiled code automatically links the module with the code object. (If you instantiate a code object manually, it's on you to pick a module appropriately.)

* Attribute lookups use three values: object, attribute name, and module.

* If the object does not have the attribute, its MRO is scanned sequentially for a registered method. If one is found, use it.

Not mentioned in this proposal: anything relating to getattr or hasattr, which will continue to look only at real methods. There may need to be an enhanced version of PyObject_GetAttr which is able to look up extension methods, but the current one simply wouldn't.

Also not mentioned: ABC registration. If you register a class as a subclass of an ABC and then register an extension method on that class, isinstance() will say that it's an instance of the ABC, but the extension method won't be there. I'm inclined to say "tough luck, don't do that", but if there are strong enough use cases, that could be added.

But otherwise, I would FAR prefer a much simpler proposal, one which changes only the things that need to be changed.

ChrisA
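A rough sketch of that registry and lookup, with hypothetical names (real attributes are assumed to have already been tried, so this runs only on failure):

    _extensions = {}   # module name -> {(type, attribute name): function}

    def register(module, typ, name, func):
        _extensions.setdefault(module, {})[(typ, name)] = func

    def lookup_extension(obj, name, module):
        # Scan the MRO sequentially for a registered method, per the proposal.
        table = _extensions.get(module, {})
        for cls in type(obj).__mro__:
            if (cls, name) in table:
                return table[(cls, name)].__get__(obj, type(obj))
        raise AttributeError(name)

    register("mymodule", list, "flatten",
             lambda self: [x for sub in self for x in sub])
    print(lookup_extension([[1, 2], [3]], "flatten", "mymodule")())   # [1, 2, 3]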

Here's a quick and dirty proof of concept I knocked up in about 20 minutes, demonstrating that no deep compiler magic is needed. It's just a small change to the way `object.__getattribute__` works. I've emulated it with my own base class, since `object` can't be monkey-patched.

The proof of concept is probably buggy and incomplete. It isn't intended to be a final, polished production-ready implementation. It's not implementation-agnostic: it requires the ability to inspect the call stack. If you're using IronPython, this may not work.

You will notice I didn't need to touch getattr to have it work, let alone hack the interpreter to make it some sort of magical construct. It all works through `__getattribute__`. The registration system is just the easiest thing that I could throw together. There are surely better designs.

Run A.py to see it in action.

-- Steve
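The attached files aren't reproduced in this digest. For a sense of the approach described, here is an independent sketch (not Steven's actual code) of an emulated base class whose `__getattribute__` consults a registry keyed by the calling module, found by inspecting the call stack:

    import sys

    _registry = {}   # (module name, class, attribute name) -> function

    def register(cls, func, *, module=None):
        # Register func as an extension method on cls, visible only to
        # code in the given module (default: the registering module).
        if module is None:
            module = sys._getframe(1).f_globals.get("__name__")
        _registry[(module, cls, func.__name__)] = func

    class Extensible:
        # Stand-in for object, since object.__getattribute__ can't be patched.
        def __getattribute__(self, name):
            try:
                return object.__getattribute__(self, name)   # real attributes win
            except AttributeError:
                # Whose lookup is this? Inspect the caller's frame.
                caller = sys._getframe(1).f_globals.get("__name__")
                for cls in type(self).__mro__:
                    func = _registry.get((caller, cls, name))
                    if func is not None:
                        return func.__get__(self, type(self))
                raise

    class MyList(Extensible, list):
        pass

    def flatten(self):
        return [x for sub in self for x in sub]

    register(MyList, flatten)                # visible in this module only
    print(MyList([[1, 2], [3]]).flatten())   # [1, 2, 3]

The frame index is the fragile part, as the next reply points out.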

On Fri, Jun 25, 2021 at 3:31 AM Steven D'Aprano <steve@pearwood.info> wrote:
Okay, so you've hidden the magic away a bit, but you have to choose the number [2] for your stack inspection. That means you have to be sure that that's the correct module, in some way. If you do *anything* to disrupt the exact depth of the call stack, that breaks.

    _hasattr = hasattr
    def hasattr(obj, attr):
        return _hasattr(obj, attr)

Or any of the other higher level constructs. What if there's a C-level function in there? This is still magic. It's just that the magic has been buried slightly.

ChrisA

I've read all the posts in this thread, and am overall at least -0.5 on the idea. I like methods well enough, but mostly it just seems to invite confusion versus the equivalent and existing option of importing functions.

I am happy, initially, to stipulate that "some clever technique" is available to make accessing an extension method/attribute efficient. My objection isn't that. Rather, my concern is "spooky action at a distance." It becomes difficult to know whether my object 'foo' will have the '.do_blaz()' method or not. Not difficult like no determinate rule could exist, but difficult in the sense that I'm looking at this one line of code in a thousand-line module. The type() of 'foo' is no longer enough information to know the answer.

That said, one crucial difference is that once an extension method is "used", we are stuck with it for the entire module. In contrast, functions can be both namespaced and redefined. So I can do:

    import alpha, beta

    if alpha.do_blaz() == beta.do_blaz():
        ...

I can also do this:

    from alpha import do_blaz

    def myfun():
        from beta import do_blaz
        ...

We get scoping and namespaces that extension methods lack. So perhaps we could add that?

    from extensions import do_blaz

    with extend(list, do_blaz):
        assert isinstance(foo, list)
        foo.do_blaz()

This would avoid the drawbacks I perceive. On the other hand, it feels like a fair amount of complexity for negligible actual gain.
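For what it's worth, a scoped `extend` along those lines can be approximated today for ordinary (non-builtin) classes. A sketch -- note that it is dynamically scoped, so the patch is visible to any code that runs during the `with` block, not just this module's:

    from contextlib import contextmanager

    @contextmanager
    def extend(cls, func):
        # Temporarily attach func as a method of cls, restoring the old
        # state on exit. Builtins such as list reject attribute
        # assignment, so this only works on ordinary Python classes.
        name = func.__name__
        sentinel = object()
        old = cls.__dict__.get(name, sentinel)
        setattr(cls, name, func)
        try:
            yield
        finally:
            if old is sentinel:
                delattr(cls, name)
            else:
                setattr(cls, name, old)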

David Mertz writes:
That said, one crucial difference is once an extension method is "used" we are stuck with it for the entire module.
While you get it for all objects of type(foo) in the whole module, presumably whatever syntax you used to "use" it, you can use to get rid of it, or at least replace it with an extension that raises or logs or somehow signals that the extension is deactivated. Or maybe del works on it.
In contrast, functions can be both namespaced and redefined. [...] We get scoping and namespaces that extension method lack.
But isn't the point of an extension to a class that we want it to be class-wide in that module? I don't see why we should turn extensions into something we can already do with functions. (Assuming we want them at all, and so far only Steven's backporting application makes any sense to me).

On Fri, Jun 25, 2021 at 1:54 PM Ricky Teachey <ricky@teachey.org> wrote:
Would this feature allow me to declare str objects as not iterable in some contexts?
If so, +1.
That depends. If the proposal is to intercept every attribute lookup (parallel to a class's __getattribute__ method), then yes, but if it's a fallback after default behaviour fails (like __getattr__), then no. I suspect that the latter is more likely; it's much easier to avoid recursion problems if real attributes are tried first, plus it's likely to impact performance a lot less. ChrisA

On Fri, Jun 25, 2021 at 7:43 PM Stephen J. Turnbull <turnbull.stephen.fw@u.tsukuba.ac.jp> wrote:
It also means that this code applies only when other things have failed, so high performance lookups will still be high performance. But there've been many variants of this proposal in this thread, all subtly different. ChrisA

On Thu, Jun 24, 2021 at 11:53:55PM -0400, Ricky Teachey wrote:
While I'm glad to see some more positivity in this thread, alas, no, extension methods are not a mechanism for making strings non-iterable. Nor do they help with floating point inaccuracies, resolve problems with circular imports, or make the coffee *wink* Extension methods are a technology for extending classes with extra methods, not removing them. -- Steve
participants (17)
- 2QdxY4RzWzUUiLuE@potatochowder.com
- André Roberge
- Brendan Barnwell
- Chris Angelico
- Christopher Barker
- David Mertz
- Johan Vergeer
- João Santos
- Richard Damon
- Ricky Teachey
- Rob Cliffe
- Simão Afonso
- Soni L.
- Stephen J. Turnbull
- Steven D'Aprano
- Thomas Grainger
- William Pickard