Using explicit parenthesization to convey aspects of semantic meaning?

Hello,

How would you feel if explicit parens were used to convey additional semantic meaning? That seems like a pretty dumb question, because, well, parens *are* used to convey additional semantic meaning. E.g.:

    1 + 2 + 3  vs  1 + (2 + 3)

The result is the same, but somehow I wanted to emphasize that 2 and 3 should be added together first.

    a + b + c  vs  a + (b + c)

Here, there's not even a guarantee of the same result, if we have user objects with a weirdly overloaded __add__().

Thanks for hanging with me so far, we're getting to the crux of the question: Do you think there can be a difference between the following two expressions:

    obj.meth()
    (obj.meth)()

? The question definitely has a trick to it (why else would there be the intro), and the first answer which comes to mind might not be the right one. As a hint, to try to get a grounded answer to that question, it would be useful to look at the difference in disassembly of the above code in CPython 3.6 vs CPython 3.7 (or later):

    python3.6 -m dis meth_call.py
    python3.7 -m dis meth_call.py

Then, try to explain the difference at a suitable level of abstraction. If that doesn't provide enough differentiation, it might be helpful to add a 3rd line:

    t = obj.meth; t()

and run all 3 lines thru CPython 3.7, and see if the pattern is now visible, and a distortion in the pattern too. What would be the explanation for all that?

For reference, the disassembly of the 3 lines with CPython 3.7 is provided:

```
  1           0 LOAD_NAME                0 (obj)
              2 LOAD_METHOD              1 (meth)
              4 CALL_METHOD              0
              6 POP_TOP

  2           8 LOAD_NAME                0 (obj)
             10 LOAD_METHOD              1 (meth)
             12 CALL_METHOD              0
             14 POP_TOP

  3          16 LOAD_NAME                0 (obj)
             18 LOAD_ATTR                1 (meth)
             20 STORE_NAME               2 (t)
             22 LOAD_NAME                2 (t)
             24 CALL_FUNCTION            0
             26 POP_TOP
...
```

-- Best regards, Paul mailto:pmiscml@gmail.com
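For reference, meth_call.py presumably contains just the three statements under discussion; a self-contained way to reproduce the comparison is to compile and disassemble them with the dis module (a sketch, file name made up):

```python
import dis

# The three statements discussed above; `obj`/`meth` need not exist,
# since the source is only compiled, never executed.
SRC = "obj.meth()\n(obj.meth)()\nt = obj.meth; t()\n"

# On CPython 3.7-3.10, the first two lines disassemble to
# LOAD_METHOD/CALL_METHOD; the third falls back to LOAD_ATTR + CALL_FUNCTION.
dis.dis(compile(SRC, "meth_call.py", "exec"))
```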

On Mon, Dec 14, 2020 at 9:11 AM Paul Sokolovsky <pmiscml@gmail.com> wrote:
Creating bound method objects can be expensive. Python has a history of noticing ways to improve performance without changing semantics, and implementing them. Details here: https://docs.python.org/3/library/dis.html#opcode-LOAD_METHOD

If you force the bound method object to be created (by putting it in a variable), the semantics should be the same, but performance will be lower. Consider:

```
rosuav@sikorsky:~$ python3.10 -c 'import dis; dis.dis(lambda obj: (obj.meth,)[0]())'
  1           0 LOAD_FAST                0 (obj)
              2 LOAD_ATTR                0 (meth)
              4 BUILD_TUPLE              1
              6 LOAD_CONST               1 (0)
              8 BINARY_SUBSCR
             10 CALL_FUNCTION            0
             12 RETURN_VALUE
rosuav@sikorsky:~$ python3.10 -c 'import dis; dis.dis(lambda obj: (obj.meth(),)[0])'
  1           0 LOAD_FAST                0 (obj)
              2 LOAD_METHOD              0 (meth)
              4 CALL_METHOD              0
              6 BUILD_TUPLE              1
              8 LOAD_CONST               1 (0)
             10 BINARY_SUBSCR
             12 RETURN_VALUE
rosuav@sikorsky:~$ python3.10 -m timeit -s 'x, f = 142857, lambda obj: (obj.bit_length(),)[0]' 'f(x)'
2000000 loops, best of 5: 101 nsec per loop
rosuav@sikorsky:~$ python3.10 -m timeit -s 'x, f = 142857, lambda obj: (obj.bit_length,)[0]()' 'f(x)'
2000000 loops, best of 5: 124 nsec per loop
rosuav@sikorsky:~$ python3.6 -m timeit -s 'x, f = 142857, lambda obj: (obj.bit_length(),)[0]' 'f(x)'
10000000 loops, best of 3: 0.124 usec per loop
rosuav@sikorsky:~$ python3.6 -m timeit -s 'x, f = 142857, lambda obj: (obj.bit_length,)[0]()' 'f(x)'
10000000 loops, best of 3: 0.123 usec per loop
```

Measurable improvement in 3.10, indistinguishable in 3.6. This is why lots of us are unimpressed by your strict mode - CPython is perfectly capable of optimizing the common cases without changing the semantics, so why change the semantics? :)

ChrisA
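The bound-method allocation mentioned here is easy to observe directly; a minimal sketch (class name made up):

```python
# Every plain attribute access creates a fresh bound-method object;
# LOAD_METHOD/CALL_METHOD exist to skip that allocation when the lookup
# is immediately followed by a call.
class C:
    def meth(self):
        return 42

obj = C()
print(obj.meth == obj.meth)   # True: the two compare equal...
print(obj.meth is obj.meth)   # False: ...but are two distinct objects
```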

Hello, On Mon, 14 Dec 2020 09:37:42 +1100 Chris Angelico <rosuav@gmail.com> wrote:
Thanks for the response. And I know all that. LOAD_METHOD/CALL_METHOD was there in MicroPython right from the start. Like, the very first commit to the project in 2013 already had it: https://github.com/micropython/micropython/commit/429d71943d6b94c7dc3c40a39f...
If you force the bound method object to be created (by putting it in a variable),
But that's what the question was about, and why there was the intro! Let's please go over it again. Do you agree with the following:

    a + (b + c)  <=>  t = b + c; a + t

? Where "<=>" is the equivalence operator. I do hope you agree, because it's the basis both for evaluation implementation and for refactoring rules, and the latter is especially important for a line-oriented language like Python, where wrapping an expression across lines requires explicit syntactic markers, which some people consider ugly, so there should be clear rules for splitting long expressions which don't affect their semantics.

So ok, if you agree with the above, do you agree with the below:

    (a.b)()  <=>  t = a.b; t()

? And I really wonder what depth of non-monotonic logic we can reach in trying to disagree with the above ;-).

Python does have cases where syntactic refactoring is not possible. The most infamous example is super(). (Which reminds me that, when args to it were made optional, it would have been much better to make it just "super.", there would be much less desire to "refactor" it.) But the more such places a language has, the less regular, the harder to learn, reason about, and optimize the language is. And the poorer designed, too. So, any language with an aspiration not to be called such words should avoid such cases. And then again, what can we tell about: "(a.b)() <=> t = a.b; t()"

[]
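A sketch of that first equivalence with an instrumented __add__ (class and names made up): both spellings trigger exactly the same call sequence, and only the extra name binding differs:

```python
# Tracing class: prints every __add__ invocation so the evaluation
# order is visible.
class Tr:
    def __init__(self, name):
        self.name = name
    def __add__(self, other):
        print(f"{self.name}.__add__({other.name})")
        return Tr(f"({self.name}+{other.name})")

a, b, c = Tr("a"), Tr("b"), Tr("c")

r1 = a + (b + c)   # prints b.__add__(c), then a.__add__((b+c))
t = b + c          # prints b.__add__(c)
r2 = a + t         # prints a.__add__((b+c)): same sequence as above
```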
But please remember that you're talking with someone who takes LOAD_METHOD for granted, from 2013. And who takes inline caches for granted from 2015. So, what would be the reason to take all that for granted and still proceed with the strict mode? Oh, the reasons are obvious: a) it's the natural extension of the above; b) it allows to reach much deeper (straight to the machine code, again), and by much cheaper means (machine code for a call will contain the same as in C, not 10x more code in guards).

For comparison, CPython added LOAD_METHOD in 2016. And lookup caching started to be added in 2019. And it took almost 1.5 years to extend caching from a single opcode to a 2nd one. 1.5 years, Chris!

```
commit 91234a16367b56ca03ee289f7c03a34d4cfec4c8
Date:   Mon Jun 3 21:30:58 2019 +0900
    bpo-26219: per opcode cache for LOAD_GLOBAL (GH-12884)

commit 109826c8508dd02e06ae0f1784f1d202495a8680
Date:   Tue Oct 20 06:22:44 2020 +0100
    bpo-42093: Add opcode cache for LOAD_ATTR (GH-22803)
```

And the 3rd one, LOAD_NAME, isn't covered, and it's easy to see why: instead of using best-practice uniform inline caches, the desired-to-be-better Python semantics spawned the following monsters:

```c
co->co_opcache_map = (unsigned char *)PyMem_Calloc(co_size, 1);

typedef struct {
    PyObject *ptr;          /* Cached pointer (borrowed reference) */
    uint64_t globals_ver;   /* ma_version of global dict */
    uint64_t builtins_ver;  /* ma_version of builtin dict */
} _PyOpcache_LoadGlobal;
```

All that stuff sits in your L1 cache, thrashes something else in and out all the time, and makes it all still slow, slow, slow. "Perfectly capable" you say? Heh.

-- Best regards, Paul mailto:pmiscml@gmail.com

On Mon, Dec 14, 2020 at 5:57 PM Paul Sokolovsky <pmiscml@gmail.com> wrote:
It really depends on what you mean by "equivalent". For instance, I'm sure YOU will agree that they have the semantic difference of causing an assignment to the name 't'. Additionally, Python will evaluate a before b and c in the first example, but must evaluate b and c, add them together, and only after that evaluate a. So, no, they aren't entirely equivalent. Obviously, in many situations, the programmer will know what's functionally equivalent, but the interpreter can't. Clarify what you mean by equivalence and I will be able to tell you whether I agree or not. (It's okay if your definition of equivalent can't actually be described in terms of actual Python code, just as long as you can explain which differences matter and which don't.) ChrisA

Hello, On Mon, 14 Dec 2020 18:05:07 +1100 Chris Angelico <rosuav@gmail.com> wrote:
I certainly agree. But the level at which I'm trying to discuss this matter is more "abstract interpretation"-ish. For example, "+" is a binary operator; you can't calculate "a + b + c" in one step. There're 2 "+"'s, and thus 2 steps. And an intermediate result should be "stored somewhere". In different computation models that "somewhere" would be different, e.g. in the stack machine model, the intermediate result would be stored in a location on the value stack, and in the register machine model, in a ("temporary") register. But from the abstract interpretation PoV, all those means of storage are equivalent: a named user variable, a stack location, a temporary variable. (They have differences beyond the "storage" aspect, sure.)
So, let's try simple yes/no questions.

Example 1: `a + b + c` vs `a + (b + c)`
Question 1: Do you agree that there's a clear difference between the left and right expressions? Yes/no.

Example 2: `a.b()` vs `(a.b)()`
Question 2: Do you agree that there's a *similar* difference here as in Example 1? Yes/no.

Then of course, depending on the outcome of the last question, there would be further questions. Specifically:

If yes: How to put a solid formal basis behind the difference in Example 2 (because so far we're just riding on the similarity with Example 1)? And how to explain it to a wider audience?

If no: What is the explanation of such a striking distinction in treatment of Example 1 vs Example 2?
[] -- Best regards, Paul mailto:pmiscml@gmail.com

On Tue, Dec 15, 2020 at 8:04 PM Paul Sokolovsky <pmiscml@gmail.com> wrote:
Yes, there is a difference.
No, there is no difference.
Uhh, it's called precedence and associativity? You know that (a + b + c) is equivalent to ((a + b) + c), not to (a + (b + c)). Is that formal enough? ChrisA

Hello, On Tue, 15 Dec 2020 20:17:37 +1100 Chris Angelico <rosuav@gmail.com> wrote:
Yes. But you answered "no" to Example 2. What makes you think that (a + b + c) is not equivalent to (a + (b + c)), but (a.b()) is equivalent to ((a.b)())? That's what I'm asking.
-- Best regards, Paul mailto:pmiscml@gmail.com

On Tue, Dec 15, 2020 at 9:22 PM Paul Sokolovsky <pmiscml@gmail.com> wrote:
Precedence and associativity? Since the two operators have the same precedence (in this case it's the same operator twice), order of evaluation is defined by its left-to-right associativity. Seriously, are you actually unaware of this fundamental, or are you playing dumb to try to make a point? I'm still trying to figure out your point here. The parentheses in one example are changing order of evaluation. In the other, they're not. I do not understand why this is even a question. I'm pretty sure most of us learned *in grade school* about BOMDAS or BODMAS or PEMDAS or whatever mnemonic you pick. Or maybe you have to wait till high school to learn that exponentiation is right-to-left associative. Either way, it's not new knowledge to most programmers. I'm done arguing, unless you actually come up with a real argument. ChrisA

On Tue, Dec 15, 2020 at 11:22 AM Chris Angelico <rosuav@gmail.com> wrote:
I've already put him in my killfile, but probably unwisely, I still see the follow-ups by you and other people I respect and enjoy reading discussion from. It feels like a chimp trying to pantomime a philosopher, really. As someone with a doctorate in philosophy, I feel triggered :-).
I'm pretty sure most of us learned *in grade school* about BOMDAS or BODMAS or PEMDAS or whatever mnemonic you pick.
I don't think I ever learned such acronyms! I mean, yes I learned about order of operations in grade school. But never with a mnemonic. -- The dead increasingly dominate and strangle both the living and the not-yet born. Vampiric capital and undead corporate persons abuse the lives and control the thoughts of homo faber. Ideas, once born, become abortifacients against new conceptions.

On Wed, Dec 16, 2020 at 3:16 AM David Mertz <mertz@gnosis.cx> wrote:
I learned BOMDAS - Brackets, O (varies in expansion but always minor things you don't often see), Multiplication, Division, Addition, Subtraction. For some reason it's also written BODMAS, which has the exact same meaning (since multiplication and division have the same precedence) but is harder to pronounce. PEMDAS uses "parentheses" instead of "brackets" (so it's probably an American English vs British English thing), and "exponentiation" in place of the first vowel. Whichever way you learned it, though, you probably learned a few quirks of algebraic notation that don't really apply to programming (such as the fraction bar), but for the most part, you'd have learned the exact model that most expression evaluators use. ("Most" because, as always, there are exceptions, but it's a good default to start with.) ChrisA

I'm going to answer the original question, even though I don't quite understand it:
Using explicit parenthesization to convey aspects of semantic meaning?
Absolutely not -- for no other reason than that it would break potentially a LOT of code. If there IS some new useful semantics that could be conveyed with a set of brackets, we're going to need to use another character. I still don't get what meaning might be called for, but even commonly used brackets, like [] or {}, would be better, because I don't think they can currently be used anywhere in which they have literally no meaning, like () does.

-CHB
-- Christopher Barker, PhD Python Language Consulting - Teaching - Scientific Software Development - Desktop GUI and Web Development - wxPython, numpy, scipy, Cython

On Tue, Dec 15, 2020 at 04:15:52PM +0000, David Mertz wrote:
It feels like a chimp trying to pantomime a philosopher, really. As someone with a doctorate in philosophy, I feel triggered :-).
Quoting "A Fish Called Wanda": Otto: Apes don't read philosophy. Wanda: Yes they do, Otto, they just don't understand it. Although I don't think that Paul is a Nietzsche-quoting ex-CIA hired killer. At least I hope not. In fairness, Paul has a lot of interesting ideas, even if they don't always pan out. But this thread is an excellent example of how *not* to engage and persuade an audience: - long and rambling, slow to get to the point; - expecting the readers to make the same "Aha!" moment you did when you could just explicitly state your observation; - patronising statements that your readers are just a step away from getting the right answer, but will they make it? - repeated hints that you have seen the correct answer and reached enlightenment, without telling the reader what the answer is; - comparisons and analogies that don't work; (under Python semantics, the closest analogy to `(obj.method)()` is not `a+(b+c)` but `(a+b)+c`) Paul, if you are reading this, you are coming across as neuro-atypical. If that is the case, trust me on this, the strategy you are taking in this thread is very unsuccessful as a persuasive and/or teaching tool. Under existing Python semantics, round brackets (parentheses) have a few different meanings, but the relevant one as far as I can tell is grouping, which changes the order that operations are performed. In expressions, I don't think that there are any cases where brackets change the semantics of operations: `(b + c)` remains the plus operator even with the brackets, it just changes the order of operation relative to any surrounding expression. The only counter-example I can think of where brackets changed the semantics of a statement was the old Python 2 `except ...` statement: except A, B, C, D: block except (A, B, C, D): block If I recall correctly, the first catches exceptions A, B and C, and binds the exception to D; the second catches exceptions A, B, C and D and doesn't bind to anything. As you can imagine, this was an extremely error-prone and surprising "Gotcha". In principle, we could give `(obj.method)()` a distinct meaning to the unbracketed form. But such a distinction would be surprising, it would clash with grouping: (obj.method or fallback_function)() and I have no idea what distinct meaning you want to give it, or why. If you are serious about continuing this thread, please get to the point of *what* change in semantics you want to give the bracketed form and *why* you think it would be useful. -- Steve

On Tue, Dec 15, 2020 at 12:04:44PM +0300, Paul Sokolovsky wrote:
But they aren't equivalent: by definition, a named user variable has a user-visible side-effect, while other forms of storage may not: in high-level languages, stack locations, registers, and temporary variables (in a non-user-visible namespace) have no visible side-effects. So this is a critical distinction that you are not making:

- there are computations where such intermediate results are visible to the user;
- and there are other computations where such intermediate results are not visible to the user.

Those two classes are not equivalent, except approximately.
Are we talking about Python? Then yes, there is a clear difference. In the first example, `a + b + c`, execution proceeds left to right: `a + b` first, then the result of that has c added on the right. The second example changes the order of operations: `b + c` is computed first, then a is added on the left.
If we are still talking about Python, then no, there is no difference between the two. In the first example, the name "a" is looked up, then the attribute "b", and then the result of that is called. In the second example, the brackets have no effect: first the name "a" is looked up, then the attribute "b", then the result of that is called. In this case, the brackets do not change the order of operation. It is like comparing `a + b + c` versus `(a + b) + c`. Or for that matter:

    a + b + c  versus  (((((a) + (b))) + (((c)))))

You can add all the redundant parentheses you like without changing the order of operation.

Does this conversation have a point? You keep dropping hints that you want to introduce a semantic difference between `obj.meth` and `(obj.meth)`. Care to tell us what that difference is supposed to be?

-- Steve

On 15/12/20 10:04 pm, Paul Sokolovsky wrote:
Yes, because the default order of operations in Python is defined so that a + b + c is the same as (a + b) + c.
No, because the default order of operations here already has a.b evaluated before making the call, so adding the parentheses changes nothing. -- Greg

Hello, On Tue, 15 Dec 2020 23:37:59 +1300 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
That's a good answer, thanks. But... it doesn't correspond to the implementation reality. As I showed right in my first mail, in "a.b()", "a.b" doesn't get evaluated at all (since CPython 3.7). Instead, something completely different gets evaluated. Ok, not "completely", but "sufficiently" different. We can continue to beat on the side of "it's only a bytecode optimization, there's a total disconnect between what happens in the compiled bytecode and the language syntax". But what if not?
-- Best regards, Paul mailto:pmiscml@gmail.com

On 16/12/20 12:24 am, Paul Sokolovsky wrote:
That's good answer, thanks. But... it doesn't correspond to the implementation reality.
Why are we talking about implementation? You said you wanted to keep to the conceptual level. At that level, there is NO difference at all. -- Greg

Hello, On Wed, 16 Dec 2020 00:50:27 +1300 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
I'm not sure how well I was able to convey it, but even in the initial message I tried to ask for an asymptotically coherent theory which would apply across layers and would allow to explain all of the following:

1. Surface syntax, and the intuitive-semantics difference between "a.b()" and "(a.b)()". (Can add just "a.b" for completeness.)
2. Deeper (i.e. abstract) syntax difference between the above.
3. Code generation difference for the above, or in more formal terms, small-step semantics for the above.

I specifically write "asymptotically coherent", because we already know that a number of parts are missing (e.g. level 2 above is completely missing so far), and there can be random cases (mostly on the bytecode generation end) which don't fit into the theory, and to explain which we'd need to appeal to outside means like: "oh, we just didn't think about that" or "oh, that's a bytecode optimization".

Ok, so here's my theory. Which still starts with setting the stage.

Besides those simple operators they teach in school, there're others, less simple. Some are still pretty familiar to programmers. For example, C's ternary conditional "?:" operator. It's indeed usually named "?:", but that doesn't tell us much about its actual syntax:

    expr1 ? expr2 : expr3

So, it's a ternary operator, unlike common unary or binary ones. And it's not prefix, postfix, or infix. Linguistics has a term for a generalized prefix/suffix/infix concept, so let's call such operators "affix".

Python also has a ternary conditional operator:

    expr2 if expr1 else expr3

Which shows: a) operator lexics doesn't have to consist of punctuation, letters work too; b) it has a different order of expressions comparing to C's version. Despite those striking differences, nobody even gets confused. Which shows the human ability to see deeper similarity across surface differences, on which ability we're going to rely in the rest of our discussion.

And we're almost there. The only intermediate step to consider is the call operator, "()". It's much older than the conditional operator in Python, but always was a special one:

    expr(args)

So, it's binary, and it's also affix, as we can't ignore that closing paren. A note about "binary": a *function* may take multiple arguments. However, a *call operator*, in its abstract syntax, takes just a single 2nd arg, of the special type "args" (and the syntax of that includes zero or more args, positional and keyword, starred and double-starred at trailing positions).

With all the above in mind, Python 3.7, in a strange twist of fate, and without much ado, has acquired a new operator: the method call, ".()". It's a ternary operator with the following syntax:

    expr.name(args)

It's an affix operator, with its 3 constituent characters nicely spread around the expressions it forms. Now, everything falls into its place. An expression like:

    expr.name

is an "attribute access operator" which gets compiled to the LOAD_ATTR instruction.

    expr(args)

is a "call operator", which gets compiled to CALL_FUNCTION, and:

    expr.name(args)

is a "method call operator", which gets compiled into LOAD_METHOD and the complementary CALL_METHOD opcodes.

CPython 3.6 and below didn't have the ".()" operator, and compiled it as "attr access" + "function call", but CPython 3.7 got the new operator, and compiles it as such (the single operator).

The ".()" operator is interesting, because it's compounded from existing operators. It thus can be "sliced" using parentheses into the individual operators.
And the meaning of (a.b)() is absolutely clear: it says "first compute 'a.b', and then call it", just the same as "a + (b + c)" says "first compute 'b + c', and then add that to 'a'". So, why does CPython 3.7+ still compile "(a.b)()" using LOAD_METHOD? The currently proposed explanation in this thread was "optimization", and let's just agree with it ;-). The real reason is of course different, and it would be nice to discuss it further.

But still, are there Python implementations which compile "(a.b)()" faithfully, with its baseline semantic meaning? Of course there are. E.g., MicroPython-derived Python implementations compile it in full accordance with the theory presented here:

    obj.meth()
    (obj.meth)()

```
$ pycopy -v -v objmeth.py
[]
00 LOAD_NAME obj (cache=0)
04 LOAD_METHOD meth
07 CALL_METHOD n=0 nkw=0
09 POP_TOP
10 LOAD_NAME obj (cache=0)
14 LOAD_ATTR meth (cache=0)
18 CALL_FUNCTION n=0 nkw=0
20 POP_TOP
21 LOAD_CONST_NONE
22 RETURN_VALUE
```

Discussion? Criticism? Concerns?
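For comparison, the CPython side of the same experiment (a sketch; on CPython 3.7-3.10 both statements produce one and the same LOAD_METHOD/CALL_METHOD sequence, so the parens leave no trace in the bytecode):

```python
import dis

# Compile only; `obj` need not exist for disassembly.
dis.dis(compile("obj.meth()\n(obj.meth)()\n", "objmeth.py", "exec"))
```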
-- Best regards, Paul mailto:pmiscml@gmail.com

Hello, On Thu, 17 Dec 2020 00:03:51 +0100 Marco Sulla <Marco.Sulla.Python@gmail.com> wrote:
But that's not what this talk is about! It's about a new exciting (hmm, we'll see) feature which, it turned out, was there all this time, but was overlooked (so, no patches are needed right away). So, I'm asking fellow Python programmers if they recognize it. If they do, we can consider how to get more from that feature, and maybe some patches will be useful. And if they don't, no patches would help. -- Best regards, Paul mailto:pmiscml@gmail.com

On Thu, Dec 17, 2020 at 8:58 AM Paul Sokolovsky <pmiscml@gmail.com> wrote:
I like it. The idea of a 'method call operator' is quite cute, it's an alternate way of thinking about the situation that seems to be self-consistent (at least on the surface). BUT. It's only worth talking about alternate interpretations if there's a reasonable chance that introducing a new way of thinking about a problem will lead to some improvement: either functional enhancement, or better abstractions. Do you have concrete ideas about how treating this language construct as a new operator might end up bringing tangible benefits? Somewhat relatedly, I very much appreciate the simplicity and clean approach that Python takes with objects (experiencing Ruby briefly having written python made this benefit very clear). With python 3, the distinction between a function and a method has been further reduced, and any change that risks moving us away from `obj.meth()` being functionally equivalent to `getattr(obj, 'meth')()` or `getattr(Cls, 'meth')(obj)` would have to have incredibly strong benefits to outweigh the cost of doing so. Steve

Hello, On Thu, 17 Dec 2020 10:03:14 +0000 Stestagg <stestagg@gmail.com> wrote: []
Thanks!
By now I already mentioned a few times that the whole motivation to introduce it is to improve performance of name lookup operations (specifically, method lookup). And in this regard, it continues on my earlier posted proposal of the "strict execution mode" (https://github.com/pycopy/PycoEPs/blob/master/StrictMode.md)
Do you have concrete ideas about how treating this language construct as a new operator might end up bringing tangible benefits?
Yes, under my proposal, the "method call operator" will literally call only methods, where "a method" is defined as "a function lexically contained within a class definition". This means that, comparing to the current lookup rules, a.b() won't need to look for "b" inside the "a" instance, but will look straight in a's class. In other words, the lookup process will be largely approaching that of C++. Of course, there's still the big difference that C++ method vtables are indexed by integer id, as all types (and thus, methods) are known at compile time, while we still will need to maintain method "vdicts", where we map from symbolic method names to the actual method code. But, that's the price to pay for the dynamic-typedness of the language. In other aspects, many existing C++-like optimizations of class hierarchies can be applied (if it all is coupled with the "strict mode part 1" proposal above).

To avoid ambiguity, shadowing a class method with an instance attribute won't work:

```python
class Foo:
    def meth(self):
        pass

o = Foo()
# doesn't work
o.meth = 1
```

This is not some completely new restriction. For example, the following already doesn't work in Python:

```python
class A:
    pass

o = A()
o.__add__ = lambda self, x: print("me called")
o + A()  # lambda above is never called
```

So again, the new restriction is nothing but an existing restriction, applied consistently to all methods, not just dunder methods. And as it doesn't work with any method, we also can be explicit about it, instead of "not working silently", like the __add__ example above shows. So, back to the 1st example:

```python
# Leads to AttributeError or RuntimeError
o.meth = 1
```

That's why, well, it's called "strict mode": erroneous actions don't pass silently.

Now, what if you have an object attribute which stores a callable (no matter if it's a bound method, a function, a class, or whatever)? You will need to call it as "(obj.callable_ref)(arg, kw=val)". This is a recent lucky improvement over the previous approach I had in mind, involving resurrecting the apply() function of the good old Python 1/Python 2 days:

    apply(obj.callable_ref, (arg,), {"kw": val})
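For illustration only, the no-shadowing rule can be prototyped in today's Python with a __setattr__ guard; a sketch (made-up class names, not how a real strict mode would be implemented):

```python
class Strict:
    """Reject instance attributes that would shadow a class-level method."""
    def __setattr__(self, name, value):
        if callable(getattr(type(self), name, None)):
            raise AttributeError(
                f"cannot shadow method {name!r} with an instance attribute")
        super().__setattr__(name, value)

class Foo(Strict):
    def meth(self):
        return "method"

o = Foo()
o.data = 1          # ordinary attributes still work
try:
    o.meth = 1      # strict mode would reject this shadowing
except AttributeError as e:
    print(e)
```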
There's a very good reason why I apply this effort to Python, as it's already a stricter/more strongly-typed language than most of its popular cohorts.

The proposal preserves the first-class nature of bound methods, so if you had `obj.meth()`, you can always write `t = obj.meth; t()`, or the forms that you show above, with the same effect. However, the reverse is not true: if you had `obj.attr`, you can't call the value of that attribute as `obj.attr()`, because that's the syntax to call methods, not attributes. You'll need to write it as `(obj.attr)()`. That syntax is fully compatible with existing Python. So, a program which adheres to the strict mode restrictions will also run in exactly the same way under the standard mode (in particular, under a Python implementation which doesn't implement the strict mode).
[] -- Best regards, Paul mailto:pmiscml@gmail.com

On Fri, Dec 18, 2020 at 5:02 AM Paul Sokolovsky <pmiscml@gmail.com> wrote:
But the addition operator isn't just calling __add__, so this IS a completely new restriction. You're comparing unrelated things.

```python
class A:
    def __add__(self, other):
        print("I got called")

class B(A):
    def __radd__(self, other):
        print("Actually I did")

A() + B()
```

The operator delegation mechanism doesn't use the class as a means of optimization. It does it because it is the language specification to do so. I said I wasn't going to respond, but this one is SUCH a common misunderstanding that I don't want people led astray by it. "a+b" is NOT implemented as "a.__add__(b)", nor as "type(a).__add__(b)"!

Can this thread move off python-ideas and onto pycopy-ideas please? It's not really talking about Python any more, it just pretends to be.

ChrisA

Hello, On Fri, 18 Dec 2020 06:09:56 +1100 Chris Angelico <rosuav@gmail.com> wrote:
No, you're just shifting the discussion to something else. Special methods assigned to object instances don't get called, in general. If you can show how to assign an arbitrary dunder method to an instance and get it called by an operator, please do that. Otherwise, that was exactly the point to show.

So, the language specification for the "strict execution mode" will say that "the only way to define a method is syntactically in the class body". What's your problem with that? You often assign your methods to individual instances after they're created? Please speak up and explain your use cases to us. [] -- Best regards, Paul mailto:pmiscml@gmail.com

17.12.20 21:58, Ethan Furman wrote:
First, it can use not only method __add__ of the type of a, but also method __radd__ of the type of b. Second, the lookup of __add__ differs from both a.__add__ and type(a).__add__. It differs from a.__add__ because skips an instance dict and __getattr__. It differs from type(a).__add__ because does not fallback to methods of the metaclass. This all is pretty complicated.
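Both differences are easy to demonstrate; a sketch (classes made up):

```python
# 1. The instance dict is skipped: assigning __add__ on the instance
#    has no effect on the + operator.
class A:
    def __add__(self, other):
        return "class add"

a = A()
a.__add__ = lambda other: "instance add"
print(a + a)          # "class add": the instance dict is ignored
print(a.__add__(a))   # "instance add": plain attribute lookup finds it

# 2. No fallback to the metaclass: type(a).__add__ would find a
#    metaclass method, but the + operator on instances does not.
class Meta(type):
    def __add__(cls, other):
        return "metaclass add"

class B(metaclass=Meta):
    pass

print(B + B)          # "metaclass add": B itself is an instance of Meta
b = B()
try:
    b + b             # + does not fall back to Meta.__add__
except TypeError as e:
    print(e)
```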

Hello, On Fri, 18 Dec 2020 12:29:10 +1300 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Surely no, as we touched on in the other email. That's how a "quick 2-clause idea" differs from a "full spec". The first presents the main idea (being able to call methods based on just the type namespace, not on the combined instance + type namespaces), whereas a "full spec" would need to consider "all other cases too". So, it's already clear that the mod.func() syntax will continue to work as before. I don't foresee any problems with implementing that, do you?

Generally, the semantic changes discussed affect *only* user-defined classes. Module objects, being builtin, aren't affected by the new semantic aspect, so I would expect zero changes at all would be required to them in that regard. I'm basing this on the architecture of the MicroPython-based VM/object model. Again, if you foresee any issues with e.g. CPython, let me know, I can check that.

Beyond modules, there're other cases to consider. E.g., I said that an instance attribute cannot shadow a class's *method* of the same name; you can reasonably ask "what about other class fields?". And I'd say "needs to be considered." I'd err on the side of banning any shadowing, but I have to admit I did use a pattern like that more than once myself:

```python
class Foo:
    option = "default"

o1 = Foo()
print(o1.option)

o2 = Foo()
o2.option = "overriden"

Foo.option = "change default for all objs which didn't override it"
```

I personally wouldn't consider it the end of the world to switch that to:

```python
def get_option(self):
    if "option" in self.__dict__:
        return self.__dict__["option"]
    else:
        return self.__class__.option
```

But it certainly requires more consideration and actually looking at a good corpus of code to see how common that is.
-- Best regards, Paul mailto:pmiscml@gmail.com

On 18/12/20 1:48 pm, Paul Sokolovsky wrote:
So, it's already clear that mod.func() syntax will continue to work as before. I don't foresee any problems with implementing that, do you?
What about this:

```python
import random

class A:
    def choice(self, stuff):
        return stuff[0]

a = A()

def f(x, y):
    return x.choice(y)

print(f(random, [1, 2]))
print(f(a, ["buckle", "my shoe"]))
```

How much of this is allowed under your restricted semantics?

-- Greg

Hello, On Fri, 18 Dec 2020 17:42:26 +1300 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
It's fully allowed, under both part 1 and part 2 of the strict mode. It won't be possible to optimize it in any way under just the "strict mode" idea, but again, it will work. Generally, the idea behind the strict mode is to optimize *dynamic name lookups*. It doesn't deal with *dynamic typing* in any way (well, no more than optimizing dynamic lookups into static ones (in some, not all, cases) would allow, and it does allow that). That's because: a) dynamic typing is what everybody loves about Python (myself included); b) there's already a way to deal with the dynamic typing issue - type annotations; c) dealing with typing issues generally requires type inference, and general type inference is a task of quite a different scale than the "strict mode" thingy I build here. (But ad hoc type inference can be cheap, and again, the strict mode effectively enables type (and even value) inference for many interesting and practical cases.)
-- Best regards, Paul mailto:pmiscml@gmail.com

On 17/12/20 8:16 am, Paul Sokolovsky wrote:
With all the above in mind, Python3.7, in a strange twist of fate, and without much ado, has acquired a new operator: the method call, ".()".
Um, no. There is no new operator. You're reading far more into the bytecode change than is actually there.
So, why CPython3.7+ still compiles "(a.b)()" using LOAD_METHOD.
Because a.b() and (a.b)() have the same semantics, *by definition*. That definition has NOT changed. Because they have the same semantics, there is no need to generate different bytecode for them.
All that means is that MicroPython is missing a potential optimisation. This is probably a side effect of the way its parser and code generator work, rather than a conscious decision.

Now, it's quite possible to imagine a language in which a.b() and (a.b)() mean different things. Does anyone remember Prothon? (It's a language someone was trying to design a while back that was similar to Python but based on prototypes instead of classes.) A feature of Prothon was that a.b() and t = a.b; t() would do quite different things (one would pass a self argument and the other wouldn't).

I considered that a bad thing. I *like* the fact that in Python I can use a.b to get a bound method object and call it later, with the same effect as if I'd called it directly. I wouldn't want that to change. Fortunately, it never will, because changing it now would break huge amounts of code.

-- Greg

On Thu, Dec 17, 2020 at 10:47 AM Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Ewww. Yes, that is definitely a bad thing. Just look at JavaScript, which has that exact distinction - a.b() will set 'this' to a, but t() would set 'this' to...... well, that depends on a lot of things, but it probably won't be a. JS's "arrow functions" behave somewhat more sanely, but at the expense of being per-instance, so the rules are a bit more complicated for constructing them. But at least you don't have to worry about lifting them out of an object. ChrisA

Hello, On Thu, 17 Dec 2020 12:46:17 +1300 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
So, here we would need to remember what constitutes "a good scientific theory". It's something which explains a sufficiently wide array of phenomena with sufficiently concise premises, predicts effects of phenomena, and allows using all that in some way which is "useful". Henceforth, the theory of the call operator was presented. And if we look at the actual code, then "a.b" is compiled into LOAD_ATTR, and "a.b()" into LOAD_METHOD, *exactly* as the theory predicts. It also makes it clear what is missing from the middle layer (abstract syntax) to capture the needed effect: literally, a MethodCall AST node.

It also explains what's the meaning (and natural effect) of "(a.b)()". And that explanation agrees with the intuitive meaning of the use of parens for grouping. That meaning being "do stuff in parens before", and indeed, the theory tells us that "(a.b)()" should naturally be implemented by LOAD_ATTR, followed by CALL_FUNCTION. Which again agrees with how some implementations do that.

Given all that, "unlearning" the concept of a method call operator may be no easier than "unlearning" the concept of natural numbers (or real numbers, or transcendental numbers, or complex numbers). Please be my guest.
By *a* particular definition. The theory is above a particular definition. It explains how "a.b" vs "a.b()" should be compiled, and indeed, they are compiled that way. It also explains how "(a.b)()" should be compiled, and the fact that particular implementations "optimize" it doesn't invalidate the theory in any way. And yeah, as soon as we, umm, adjust the definition, the theory gets useful to explain what may need to be changed in the code generation.
Good nailing down! But it's the same for CPython. CPython compiles "(a.b)()" using LOAD_METHOD not because it consciously "optimizes" it, but simply because it's *unable* to represent the difference between "a.b()" and "(a.b)()". A more specific explanation:

1. The MicroPython compiler is CST (concrete syntax tree) based, so it sees all those parens. (And yes, it has to do silly things to not miss various optimizations, and still misses a lot.)

2. The CPython compiler is AST (abstract syntax tree) based, and as the current ASDL definition (https://github.com/python/cpython/blob/master/Parser/Python.asdl) misses a proper MethodCall node, it conflates "a.b()" and "(a.b)()" together.

So, the proper way to address the issue would be to add an explicit MethodCall node. So far (for Pycopy), I don't dare take such a step, instead having a new "is_parenform" attribute on existing nodes, telling that the corresponding node was parens-wrapped in the surface syntax. (And I had it before, as it's needed to e.g. parse generator expressions with a recursive-descent parser.)
Now, it's quite possible to imagine a language in which a.b() and (a.b)() mean different things.
Not only that! It's even possible to imagine a Python dialect where "a.b" and "a.b()" would mean *exactly* what they are: the first is attribute access, the second is a method call, no exceptions allowed. But then the question arises how to call a callable stored in an attribute. So tell me, Greg (first thing which comes to your mind, for as long as it's "a)" or "b)" please), which do you like better: a) (a.b)() syntax b) apply() being resurrected
I don't remember it. Turns out, Google barely remembers it either, and I had to give it a bunch of clarifying questions and "do you really mean that?" answers before I got to e.g. http://wiki.c2.com/?ProthonLanguage
Of course it never will. "It" == the current Python's semantic mode. Instead, new modes will capitalize on the newly discovered features. One such mode, humbly named "the Strict Mode", was presented here on the list recently. But that was only part 1 of the strict mode. We just discussed pillar #1 of the strict mode, part 2, that pillar being the separation between methods and attributes.

But you'll ask "perhaps we can separate them, but how can we *clearly* separate those, so there is no ambiguity?". Indeed, that would be pillar #2. It is inspired (retroactively of course, as I had that idea for many years) by feedback received during the discussion of the idea of block-scoped variables. Some people immediately responded that they want shadowing to be disallowed (e.g. https://mail.python.org/archives/list/python-ideas@python.org/message/3IKFBQ...). Of course, it doesn't make any sense to disallow shadowing of block-scoped local variables, that brings no theoretical or implementation benefits (and practical benefits are addressed by linters or opt-in warnings). But those people were absolutely right: there're places in Python where shadowing is truly harmful. So, their wish is granted, but in the good tradition of wish-granting, not where and not in the way they wanted. For the strict mode, part 2, pillar #2 says: "It's not allowed to shadow a method with an attribute."

And combined, parts 1 and 2 allow optimizing namespace lookups in Python, where part 1 deals with module/class namespaces, and part 2 with object namespaces. What's interesting is that part 2, just like part 1, of the strict mode doesn't really make any "revolutionary" changes. It just exposes, emphasizes, and makes consistent the properties the Python language already has. Ain't that cute? Do you spot any issues, Greg?
[] -- Best regards, Paul mailto:pmiscml@gmail.com

On 17/12/20 11:25 pm, Paul Sokolovsky wrote:
I'm pretty sure whoever added the optimisation fully intended it to apply to (a.b)() as well as a.b() -- given that they are supposed to have the same semantics, why would you *not* want to optimise both? So even if the AST were able to distinguish between them, there would be no reason to do so. Your alternative theory would be of use if you wanted to change the semantics of (a.b)(). But the semantics need to be defined first, then you make the AST and code generation whatever it needs to be to support those semantics.
a) (a.b)() syntax b) apply() being resurrected
I can't answer that without knowing what alternative semantics you have in mind for (a.b)(). You haven't really explained that yet.
We just discussed the pillar #1 of the strict mode, part 2. For that pillar being the separation between methods and attributes.
So you *don't* intend to make any semantic changes? I'm still confused. -- Greg

Hello, On Fri, 18 Dec 2020 01:23:34 +1300 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
But by now I did, and you didn't need to wait for me to do it, because "(a.b)()" does *exactly* what *you* or anybody else would think it does, based on your knowledge of what grouping parens in expressions do. So again, "(a.b)()" first extracts the value of the "b" *attribute* from the object held in variable "a", then calls that value with 0 arguments. That's in striking difference to "a.b()", which calls the *method* "b" of the object held in variable "a".
In a sense I don't. I intend to make very fine-grained, finely cut semantic *adjustments*, with a motive of being able to implement the semantics more efficiently, and with a constraint that a program valid under the new semantics is also valid (and has the same effect) under the old. Again, this is a continuation of the effort previously presented in https://mail.python.org/archives/list/python-ideas@python.org/thread/KGN4Q2E... where you can assess yourself how successful the "also valid under old" part was so far. For indeed, that's the distinguishing trait of how my effort differs from some (many?) other efforts, for example the Prothon project that you quoted. Where people start with ideas already rather distinct from Python, and then can't resist diverging more, and more, and more. Whereas I try (with an aspiration to be pretty thorough) to make as few changes as possible to achieve the desired effect (which is the ability to write more efficient Python programs, not inventing a new language!)
-- Best regards, Paul mailto:pmiscml@gmail.com

On 18/12/20 1:52 am, Paul Sokolovsky wrote:
We're going round in circles. Are we talking about current Python semantics, or proposed new semantics? Please just give us a clear answer on that. If we're talking about current semantics, then my knowledge of what grouping parens do (which is based on what the Python docs define them to do) means that a.b() and (a.b)() are just different ways of writing *exactly* the same thing.
That's in striking difference to "a.b()", which calls the *method* "b" of object held in variable "a".
No, it doesn't. It calls *whatever* object a.b returns. It only calls a method if b happens to be a method. It might not be. It might be a callable object stored in an instance attribute, or a function stored in a module attribute, in which case the very same syntax is just an ordinary call, not a method call. That last case is something you might want to ponder. The same LOAD_METHOD/CALL_METHOD bytecode sequence is being executed, yet it's *not* performing a method call! How does your new improved theory of method calls explain *that*?
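A sketch of that last case, assuming CPython 3.7-3.10 opcode names and a made-up module name:

```python
import dis

# The compiler cannot tell whether `mod` is an instance, a class or a
# module: attribute-then-call is compiled identically, so this
# "non-method call" still emits LOAD_METHOD/CALL_METHOD.
dis.dis(compile("mod.func()", "<demo>", "exec"))
```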
In a sense I don't. I intend to make very fine-grained, finely cut semantic *adjustments*
If you come back when you've figured out exactly what those adjustments will be and are able to explain them clearly, we will have something to talk about. -- Greg

Hello, On Fri, 18 Dec 2020 03:05:02 +1300 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Yep :-(. I wonder if reading each other's arguments more thoroughly could help.

I'm not talking about the current Python semantics, because there's nothing to talk about. We both agreed it won't change, so what do you want to talk about re: it? The only purpose "current Python semantics" serves in the discussion is that the new semantics gets explained in terms of its difference wrt the current. Otherwise, I'm talking about how the proposed new semantics comes out smoothly and naturally from the existing Python semantics, syntax, and other details. And feedback on the "comes out smoothly and naturally" part is exactly what I'm interested in.
If we're talking about current semantics, then my knowledge
Right, so if the current semantics clouds your view, one approach would be to put it aside, and look at the things with unencumbered look, to see if you can see the things I'm talking about.
Bingo! That's exactly what I seek to improve, so that it with much higher probability "returns" a *method*, and not just "whatever".
So, it seems like you haven't read what I wrote in the message https://mail.python.org/archives/list/python-ideas@python.org/message/PF42DP... , because you explain how the current semantics work, after I explained how it's being changed to work differently. And if something is unclear in my explanation, just let me know. It's not like the message above is 2 lines and leaves much for you to guess. It's not a 30KB text either, which chews thru all aspects (like my previous proposal does). So, if some aspect *of the new semantics* is unclear, I'd be glad to elaborate on it. (Ahead of a 30KB text, which is coming, but much later.)
As in any other such case: "an implementation artifact grounded in semantic ambiguities of a dynamic language". My aim is to reduce such ambiguities largely, while still preserving the dynamic-typing spirit of the language (where it's needed). So, a compiler which employs my previous proposal https://mail.python.org/archives/list/python-ideas@python.org/thread/KGN4Q2E... will be able to optimize that to just CALL_FUNCTION in *most* cases. But not all cases, yeah.
Again, the end of https://mail.python.org/archives/list/python-ideas@python.org/message/PF42DP... drafts the scheme of the new proposed semantics (codenamed "the strict mode part 2"). I will also try to summarize it in a different (hopefully, clearer and more down-to-earth) way in a response to another participant. Eventually I'll also write down a full "spec", but that likely will take quite some time (and then will be a long read). So, if you happen to be really interested in that stuff, while waiting for the new spec, you may be interested to skim thru the previous part of the proposal, https://mail.python.org/archives/list/python-ideas@python.org/thread/KGN4Q2E... or formatted version at https://github.com/pycopy/PycoEPs/blob/master/StrictMode.md
-- Best regards, Paul mailto:pmiscml@gmail.com

On Thu, Dec 17, 2020 at 03:52:51PM +0300, Paul Sokolovsky wrote:
In CPython, both generate exactly the same byte-code, and both will call any sort of object. Or *attempt* to call, since there is no guarantee that the attribute returned by `a.b` (with or without parens) will be a callable object. You are imagining differences in behaviour which literally do not exist.
--
Steve

On Fri, Dec 18, 2020 at 01:23:34AM +1300, Greg Ewing wrote:
Not only are they *supposed* to have the same semantics, but they *literally do* have the same semantics. The CALL_METHOD op code doesn't just call methods, just as the CALL_FUNCTION op code doesn't just call functions. The only difference between them is the implementation of *how* they perform the call. -- Steve

On Wed, Dec 16, 2020 at 10:16:01PM +0300, Paul Sokolovsky wrote:
With all the above in mind, Python3.7, in a strange twist of fate, and without much ado, has acquired a new operator: the method call, ".()".
No it hasn't. That's not a language feature, it is not documented as a language feature, and it could be removed or changed in any release without any deprecation or notice. It is a pure implementation detail, not a language feature. There is no documentation for a "method call" operator, and no interpreter is required to implement either LOAD_ATTR or CALL_METHOD.

Byte code instructions are not part of the Python language. Every interpreter is free to decide on whatever byte codes it likes, including no byte codes at all. IronPython uses whatever primitives are offered by the .Net CLR, Jython uses whatever the JVM offers. Nuitka intends to generate C code; Brython generates Javascript.

Suppose that I forked CPython 3.7 or later, and made a *single change* to it. When compiling expressions of the form

    expr.name(...)  # arbitrary number of arguments

my compiler looks for an environment variable PYOPTIMIZEMODE. If that environment variable is missing, empty or false, the above expression would be compiled using the old LOAD_ATTR and CALL_FUNCTION opcodes. But if it existed and was true, the LOAD_METHOD and CALL_METHOD opcodes would be used instead.

Two questions:

(1) Is this a legal Python implementation?
(2) Apart from some performance differences, what user-visible difference to the behaviour of the code does that environment variable cause?

I think that the answers are (1) Yes and (2) None whatsoever.
It's a ternary operator with the following syntax:
expr.name(args)
No it isn't. It is two pseudo-operators, one of which is a binary "attribute lookup" operator:

    expr.name

and the other is an N-ary "call" operator. I say *pseudo* operator, because the meaning of "operator" is documented in Python, and neither `.` nor function call `( ... )` is included as an actual operator. But the important thing here is that they are two distinct operations: look up the attribute, and call the attribute.
The language does not specify what, if any, instructions the dot will be compiled to -- or even if it is compiled *at all*. A pure interpreter with no compilation stage would still be a valid Python implementation (although quite slow). Because Python is Turing complete, we could implement a full Python interpreter using a clockwork "Difference Engine" style machine, or a Turing Machine, or by merely running the code in our brain. None of these require the use of a LOAD_ATTR instruction.

The parens make **no semantic difference**, which is what we have been saying for **days**. The CPython byte-code is identical, parens or no parens, but more importantly, the *semantics* of the two expressions, as described by the language docs, require the two to be identical.

And here is a bracketed expression where LOAD_ATTR gets used, which categorically falsifies your prediction that a parenthesized dot expression followed by a call will use CALL_METHOD:
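```python
# (Sketch of one such bracketed expression; assuming the tuple-indexing
# trick used earlier in this thread, since the original listing did not
# survive the archive.)
import dis

# On CPython 3.7-3.10 this disassembles to LOAD_ATTR ... CALL_FUNCTION,
# with no LOAD_METHOD/CALL_METHOD in sight.
dis.dis(compile("(a.b,)[0]()", "<demo>", "eval"))
```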
--
Steve

On Tue, Dec 15, 2020 at 02:24:48PM +0300, Paul Sokolovsky wrote:
As I showed right in my first mail, in "a.b()", "a.b" doesn't get evaluated at all (since CPython3.7).
`a.b` still has to be looked up, even with the new fast LOAD_METHOD byte-code. The difference is that it may be able to avoid instantiating a MethodType object, since that would be immediately garbage-collected once the function object it wraps is called. The lookup still has to take place:
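```python
# (Sketch: an instrumented class makes the lookup visible; names made up.)
class Demo:
    def __getattribute__(self, name):
        print("looking up", name)
        return object.__getattribute__(self, name)
    def method(self):
        return 42

obj = Demo()
obj.method()   # prints "looking up method": LOAD_METHOD still performs the lookup
```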
If the above example doesn't convince you that you are mistaken, how about this? In this second demonstration, obj.method *doesn't even exist* ahead of time and has to be created dynamically on attribute lookup, before it can be called:
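```python
# (Sketch: the "method" is manufactured on the fly by __getattr__.)
class Dynamic:
    def __getattr__(self, name):
        if name == "method":
            return lambda: "made on demand"
        raise AttributeError(name)

obj = Dynamic()
print(obj.method())   # the callable is created during the lookup, then called
```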
--
Steve

Hello, On Mon, 14 Dec 2020 02:17:52 -0500 David Mertz <mertz@gnosis.cx> wrote:
Right, thanks. But the original question was about a somewhat different matter: if you agree that there's a difference between "a + b + c" vs "a + (b + c)", do you agree that there's a similar-in-nature difference with "a.b()" vs "(a.b)()"? If no, then why? If yes, then how to explain it better (e.g. to Python novices)? -- Best regards, Paul mailto:pmiscml@gmail.com

On Tue, Dec 15, 2020 at 8:08 PM Paul Sokolovsky <pmiscml@gmail.com> wrote:
https://docs.python.org/3/reference/expressions.html#operator-precedence ChrisA

Hello, On Tue, 15 Dec 2020 20:18:11 +1100 Chris Angelico <rosuav@gmail.com> wrote:
No worries, that table is not complete. For example, "," (comma) is a (context-dependent) operator in Python, yet that table doesn't have explicit entry for it. Unary "*" and "**" are other context-dependent operators. (Unary "@" too.)
-- Best regards, Paul mailto:pmiscml@gmail.com

On 15/12/20 11:28 pm, Paul Sokolovsky wrote:
Those things aren't considered to be operators. The term "operator" has a fairly specific meaning in Python -- it's not just "any punctuation mark". It's true that the operator precedence table won't tell you the precedence of everything in Python -- you need to consult the grammar for the rest. -- Greg

On Mon, Dec 14, 2020 at 01:09:56AM +0300, Paul Sokolovsky wrote:
Okay, I'll bite. Of course there is a difference: the first statement is ten characters long, the second is 12 characters long.

Are you asking for a semantic difference (the two statements do something different) or an implementation difference (the two statements do the same thing in slightly different ways)?

Implementation differences are fairly boring (to me): it might happen to be that some Python interpreters happen to compile the first statement into a slightly different set of byte codes to the second. I don't care too much about that, unless there are large performance (speed or memory) differences. For what it is worth, Python 1.5 generates the exact same byte code for both expressions; so does Python 3.9. However the byte code for 1.5 and for 3.9 are different.

However, the semantics of the two expressions are more or less identical in all versions of Python, regardless of the byte-code used. (I say "more or less" because there may be subtle differences between versions, relating to the descriptor protocol, or lack thereof, old and new style classes, attribute lookup, and handling of unbound methods.)

[...]
At a suitable level of abstraction, there is no difference. The suitable level of abstraction is at the level of the Python execution model, where `(expression)` and `expression` mean the same, where the brackets are used for grouping.
That clearly has different semantics from the first two: it has the side-effect of binding a value to the name t. I'm not sure where you think this question is going to lead us. Wherever it is, I wish you would get to the point. Are you suggesting that we give a *semantic* difference to: ( expression ) compared to the unbracketed `expression`? -- Steve

Hello, On Mon, 14 Dec 2020 19:39:27 +1100 Steven D'Aprano <steve@pearwood.info> wrote:
Fair enough.
I'm asking for a semantic difference, it's even in the post title. But the semantic difference is not in "what the two statements do", but in "what the two statements mean". Difference in "doing" is entailed by difference in "meaning". And there may be no difference in "doing", but still a difference in "meaning", as the original "1+2+3" vs "1+(2+3)" example was intended to show.
Implementation differences are fairly boring (to me):
Right. Implementation points are brought into the discussion only to show the issue. As mentioned above, the actual progression is the opposite: first there's the semantic meaning, then it's implemented. So, what's the semantic meaning of "a.b()" such that it gets compiled to the LOAD_METHOD bytecode?
Right, and the question is what semantic (not implementational!) shift happened in 3.7 (that's the point when it started to be compiled differently).
However, the semantics of the two expressions are more or less identical in all versions of Python, regardless of the byte-code used.
That's what I'd like to put under scrutiny.
Right, and exactly those "subtle differences" are what I'd like to discuss. I'd like to start, however, with a more abstract model of the difference in meaning, but afterwards, if common ground is found, it would be interesting to check the specific not-exactly-on-surface Python features which you list. []
The level of abstraction I'm talking about is where you look not just at "`(expression)` vs `expression`" but at:

expression <op> expression
vs
expression <op> (expression)

Where <op> is an arbitrary operator. As we already concluded, those do differ, even considering such a simple operator as "+". So... what can we say about the difference between a.b() and (a.b)() then?
But that's yet another good argument to introduce block-level scoping to Python (in addition to the already stated great arguments), because then (a.b)() will be *exactly* equivalent to (inside a function):

if 1:
    const t = a.b
    t()

This neither gets affected by the surrounding environment (all variables introduced are new, regardless of their names), nor affects it (all variables are block-local, and not visible outside the block).
I'm not sure where you think this question is going to lead us. Wherever it is, I wish you would get to the point.
I'm sorry if this looks like a quiz, that's not the intention. But I really would like to see if other people can spot in this stuff what I spotted (after pondering it), and I don't want to bias you in any way by jumping to my "conclusions". I do believe we'll get there, but then I don't want to be biased myself either. That's why it's a step-by-step process, and I appreciate that the people here are willing to walk it.
Hopefully, that was answered above. To end the post with a summary, I'm suggesting that there's a difference between:

expression <op> expression
vs
expression <op> (expression)

Which is hopefully hard to disagree with. Then I'm asking, how consistent are we in understanding and appreciating that difference, taking the example of:

a.b()
vs
(a.b)()

(And it's not a purely language-lawyering question, it has practical consequences.) -- Best regards, Paul mailto:pmiscml@gmail.com

On Tue, Dec 15, 2020 at 8:49 PM Paul Sokolovsky <pmiscml@gmail.com> wrote:
Have you read the release notes? https://docs.python.org/3/whatsnew/3.7.html#optimizations Method calls are now up to 20% faster due to the bytecode changes which avoid creating bound method instances. (Contributed by Yury Selivanov and INADA Naoki in bpo-26110.) It is an *optimization*. There are NO semantic differences, other than the ones you're artificially creating in order to probe this. (I don't consider "the output of dis.dis()" to be a semantic difference.) Why do you keep bringing up irrelevant questions that involve order of operations? The opcode you're asking about is *just* an optimization for "look up this method, then immediately call it" that avoids the construction of a temporary bound method object. The parentheses are a COMPLETE red herring here. What is your point? ChrisA

On 15/12/20 10:49 pm, Paul Sokolovsky wrote:
There was no semantic shift. The change had *nothing* to do with semantics. It was *purely* an optimisation. I'm not sure what we can say to make this any clearer.
There is *sometimes* a difference, depending on exactly what the two expressions are, and what <op> is.
There is no inconsistency. Note also that: 1 + 2 * 3 is the same as 1 + (2 * 3) because the default order of operations already has * evaluated before +. The same kind of thing is happening with a.b() vs (a.b)(). -- Greg

On Tue, Dec 15, 2020 at 12:49:26PM +0300, Paul Sokolovsky wrote:
So far all you have talked about is implementation differences such as whether intermediate results are put on a stack or not, and differences in byte-code from one version of Python to another.
Right, so why are you wasting time talking about what they *do*, i.e. whether they put intermediate results on the stack, or in a register?
In the case of ints, there is no difference in meaning. For integers, addition is associative, and the order does not matter. So here you *say* you are talking about semantics, but you are actually talking about implementation. With integers, the semantics of all of these are precisely the same:

1 + 2 + 3
(1 + 2) + 3
1 + (2 + 3)
3 + (2 + 1)

etc. The order in which you *do* the additions makes no difference to the semantics.
- Look up the name "a" in the local namespace;
- look up the attribute name "b" according to a's MRO, including the use of the descriptor protocol, slots, etc;
- call whatever object gets returned.
[...]
Absolutely none. There was a semantic shift, but it was back in Python 2.2 when new style classes and the descriptor protocol were introduced.
Okay, the major semantic differences include:

- in Python 1.5, attribute name look-ups call `__getattr__` if the name is not found in the object's MRO;
- in Python 3.9, attribute name look-ups first call `__getattribute__` before checking the MRO and `__getattr__`;
- the MRO is calculated differently between 1.5 and 3.9;
- in 3.9, the descriptor protocol may be invoked;
- descriptors and the descriptor protocol did not exist in 1.5;
- there are a few differences in the possible types of `obj.meth` between the versions, e.g. Python 1.5 had both bound and unbound instance methods, while Python 3.9 does not.

There may be other differences, but those are the most important ones I can remember.
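To make the descriptor point concrete, here is a small sketch showing that the protocol fires identically with or without the grouping parens:

class Descr:
    def __get__(self, obj, objtype=None):
        print("descriptor invoked")
        return lambda: 42

class C:
    meth = Descr()

c = C()
c.meth()    # prints "descriptor invoked", then calls the returned lambda
(c.meth)()  # exactly the same: the parens change nothing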
See above.
That is an extremely straight-forward change in execution order. Whether that makes any semantic difference depends on whether the operations involved are associative or not.
There isn't one. Even though the `.` (dot) and `x()` (call) are not actual operators, we can treat them as pseudo-operators. According to Python's precedence rules, the first expression `a.b()` is the same as:

- lookup name a
- lookup attribute b
- call

and the second `(a.b)()` is:

- lookup name a
- lookup attribute b
- call

which is precisely the same. The parens make no semantic difference.

Paul, I think you have fooled yourself by comparing two *different* situations. You compare a use of parens where they change the order of operations:

a + b + c
a + (b + c)

but you should be looking at this:

(a + b) + c  # parens are redundant and make no difference

That is exactly equivalent to your method example:

(obj.meth)()  # left pair of parens are redundant
You are changing the rules of the discussion as you go. You said nothing about a hypothetical new feature of block scopes and constants. You gave an example of existing Python code: t = obj.meth; t() There is no if block here, no new scope, no constants. t is a plain old ordinary local variable in the current scope.
There may or may not be a difference, depending on the associativity and precedence rules involved.
Whereas this example has only a single interpretation of the precedence:

- lookup a in the current scope;
- lookup attribute b on a;
- call the result.

There's no other order of operations available, so no way for the parens to change the order of operations:

- you cannot call a.b until you have looked up b on a;
- you cannot lookup b on a until you have looked up a.

Let's be concrete:

s = "hello world"
s.upper()  # returns "HELLO WORLD"

There is only one possible order of operations:

- lookup s;
- lookup "upper" on s;
- call the resulting method.

You cannot use parens to change the order of operations:

- call the resulting method first (what method?)
- lookup "upper" on s (what's s?)
- lastly lookup s (too late!)

-- Steve

On 13/12/2020 22:09, Paul Sokolovsky wrote:
No. The value of an expression in parentheses is the value of the expression inside the parentheses, and in this case the parentheses do not affect the order of evaluation.
The explanation is an optimisation introduced in 3.7 that the use of an intermediate variable prevents. The compiler applies it when it can see the only use of the attribute is an immediately following call. Having burrowed into the implementation, I'm certain it tries hard to be indistinguishable from the unoptimised implementation (and succeeds I think), even to the point of failing in the same way when that is the programmed outcome. LOAD_METHOD goes far enough down the execution of LOAD_ATTR to be sure the bound object would be a types.MethodType containing a pair of pointers that CALL_FUNCTION would have to unpack, and pushes the pointers on the stack instead of creating the new object. Otherwise it completes LOAD_ATTR and pushes a bound object and a NULL, which is what CALL_METHOD uses to decide which case it is dealing with. The meaning of the code is what it does detectably in Python, not what it compiles to (for some definition of "detectably" that doesn't include disassembly). Jeff Allen
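The object whose creation gets skipped is observable from ordinary Python - a small sketch (class C is just a stand-in):

import types

class C:
    def meth(self):
        return "hi"

c = C()
bm = c.meth                     # the LOAD_ATTR path really builds this object
assert isinstance(bm, types.MethodType)
assert bm.__self__ is c                    # the two pointers LOAD_METHOD
assert bm.__func__ is C.__dict__["meth"]   # pushes on the stack instead
assert bm() == "hi"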

Hello, On Tue, 15 Dec 2020 08:25:25 +0000 Jeff Allen <ja.py@farowl.co.uk> wrote:
You're on the right track. (Well, I mean you're on the same track as me.) So, what's the order of evaluation and what's being evaluated at all?
Right. But I would suggest we rise up a level in abstraction, and think not in terms of "intermediate variables" but in terms of "intermediate storage locations". More details in my today's reply to Chris Angelico.
You're now just a step away from the "right answer". Will you make it? I did. And sorry, the whole point of the discussion is to see if the whole path, each step on it, and the final answer are as unavoidable as I now imagine them to be, so I can't push you towards it ;-).
Having burrowed into the implementation,
Great! As I mentioned in the other replies, I brought up implementation matters (disassembly) to represent the matter better. But the proper way is to start with semantics, and then consider how to implement it (and those considerations can have a feedback effect on the desired semantics too, of course). So, regardless of whether it was done like that or not in that case (when LOAD_METHOD was introduced), let's think what semantics [could have] led to the LOAD_METHOD vs LOAD_ATTR implementation? []
Jeff Allen
-- Best regards, Paul mailto:pmiscml@gmail.com

On 15/12/20 11:16 pm, Paul Sokolovsky wrote:
The fact that it's a *named* intermediate storage location is important, because it means the programmer can see it, and will expect it to hold a bound method. So the compiler can't optimise away the bound method creation in that case. Well, it could if it could prove that the intermediate value isn't used for anything else subsequently, but that seems like a very rare thing for anyone to do. Why bother naming it if you're only going to call it once and then throw it away? So the compiler only bothers with the most common case.
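This is easy to see in the generated code (a sketch for CPython 3.7+; meth is a dummy attribute name):

import dis

def f(obj):
    t = obj.meth   # t is user-visible, so a real bound method object
    t()            # must be built: LOAD_ATTR here, not LOAD_METHOD

dis.dis(f)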
You're now just a step away from the "right answer". Will you make it?
I'll be interested to find out what you think the "right" answer is. Or what the actual question is, for that matter -- that's still not entirely clear. -- Greg

On Tue, Dec 15, 2020 at 01:16:21PM +0300, Paul Sokolovsky wrote:
You're now just a step away from the "right answer". Will you make it? I did.
Sorry Paul, but you didn't. You fooled yourself by comparing chalk and cheese, and imagining that because you can eat cheese (change the order of operation by using parens, which is a semantic difference), you can also eat chalk (imagine a semantic difference between `obj.method` and `(obj.method)`). Your mistake was comparing `(obj.method)()` with `a + (b + c)` when you should have compared it to `(a + b) + c`. Not every source code difference has a semantic difference:

x = 1
x=1
x = 1
x = 1 or None
x = (1)

all mean the same thing. Putting parens around the final (1) changes nothing.

Let's get away from using round brackets for function call, because it clashes with the use of round brackets for grouping. All we really need is a *postfix unary operator*.

x()  # the brackets are "postfix unary zero-argument function call"

Let's use the factorial operator, "bang" `!` instead.

obj.attr!

has to be evaluated from left to right under Python's rules. You can't apply the factorial operator until you have looked up attr on obj, and you cannot lookup attr until you have looked up obj. So the only possible order of operations is left to right. This is not meaningful:

# attr factorial first, then lookup obj, then the dot lookup
obj.(attr!)

but while this is meaningful, the order of operations is unchanged:

# lookup obj first, then dot lookup attr, then factorial
(obj.attr)!

Replace the factorial postfix operator with the `()` call operator, and the logic remains the same. -- Steve

Hello, On Tue, 15 Dec 2020 23:28:53 +1100 Steven D'Aprano <steve@pearwood.info> wrote:
But that's not what I was talking about. I was talking about the difference between `obj.method()` and `(obj.method)()`, but you in your recent emails keep reducing that to `obj.method` and `(obj.method)`. Sorry, but you can't just drop those parens at the end, they have a specific meaning (a call). And I initially optimized the presentation for the number of characters with examples like "a.b()", but perhaps I should have written "a.b(foo, bar, baz)", so that the "(foo, bar, baz)" part was big and pronounced and there was no desire to drop it silently.
Your mistake was comparing `(obj.method)()` with `a + (b + c)` when you should have compared it to `(a + b) + c`.
No, the comparison was to show that placing parens *does* already change the semantic meaning of operator sequences. The example could only confuse you if you try to compare chalk and cheese, sorry, "+" and method call, directly. They behave with regard to parens *similarly*, but not *exactly* as you might have thought.
Yeah, but imagine if parens were put like: x(= 1). That would completely change the meaning. Like, it would become a SyntaxError in current Python, but that actually means we could assign a new meaning to it! Indeed, how many proposals to use up that syntax have we had here on python-ideas? 1, 2, 3? Zero, you say? Oh, I'm sure someone will accept the challenge ;-). For example, I believe we had proposals for syntax like x(a=) already, no?
Let's get away from using round brackets for function call, because it clashes with the use of round brackets for grouping.
I wouldn't say they "clash". They are completely disambiguated syntax-wise. But based on the discussion we have here, it's fair to say that some people get confused seeing parens with different meanings together; for example, you keep dropping one of the paren pairs ;-).
All we really need is a *postfix unary operator*.
Warm! I'd even say "hot", but I know it won't click, even with the following clarifications: 1. A function call is not a postfix unary operator. It's a binary operator. Its syntax is: expr(args) 2. It's not even an infix operator, like for example "+". You can't really ignore that closing paren - without it, the syntax is invalid. So, the function call operator "()" is such a funky operator which is "spread around" the new expression it forms.
Sounds good, for as long as they're separate operators. But look at the conditional Python operator: foo if bar else baz Getting warmer, no?
Not necessarily. Some operators are less simple than the others. Let the conditional operator be the witness.
-- Steve
[] -- Best regards, Paul mailto:pmiscml@gmail.com

On 2020-12-15 5:16 a.m., Paul Sokolovsky wrote:
Oh, that was the point of the discussion? Wonderful then, I can easily answer. Considering that, over multiple days of discussion, literally nobody came to the same conclusion that you did, it's obvious that the whole path, each step on it, and the final answer are NOT as unavoidable as you imagine them to be. I think we can consider this case closed. Alexandre Brault

Hello, On Thu, 17 Dec 2020 15:28:47 -0500 Alexandre Brault <abrault@mapgears.com> wrote:
Right, that means the spec for "strict mode, part 2" will need to be as long and detailed as that for "strict mode, part 1", and collect together all the stuff which was discussed here over these multiple days. Whereas if other people had come to that conclusion, it could have been much shorter and faster to write.
I think we can consider this case closed
Alexandre Brault
-- Best regards, Paul mailto:pmiscml@gmail.com

Hello, On Mon, 14 Dec 2020 09:37:42 +1100 Chris Angelico <rosuav@gmail.com> wrote:
Thanks for the response. And I know all that. LOAD_METHOD/CALL_METHOD was there in MicroPython right from the start. Like, the very first commit to the project in 2013 already had it: https://github.com/micropython/micropython/commit/429d71943d6b94c7dc3c40a39f...
If you force the bound method object to be created (by putting it in a variable),
But that's what the question was about, and why there was the intro! Let's please go over it again. Do you agree with the following:

a + (b + c) <=> t = b + c; a + t

? Where "<=>" is the equivalence operator. I do hope you agree, because it's both the basis for evaluation implementation and for refactoring rules, and the latter is especially important for a line-oriented language like Python, where wrapping an expression across lines requires explicit syntactic markers, which some people consider ugly, so there should be clear rules for splitting long expressions which don't affect their semantics. So ok, if you agree with the above, do you agree with the below:

(a.b)() <=> t = a.b; t()

? And I really wonder what depth of non-monotonic logic we can reach in trying to disagree with the above ;-). Python does have cases where syntactic refactoring is not possible. The most infamous example is super(). (Which reminds me that, when args to it were made optional, it would have been much better to make it just "super.", there would be much less desire to "refactor" it.) But the more such places a language has, the less regular, harder to learn, reason about, and optimize the language is. And poorer designed too. So, any language with an aspiration to not be called names should avoid such cases. And then again, what can we tell about: "(a.b)() <=> t = a.b; t()" []
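To put the claimed equivalence in runnable form (a small sketch; the binding of the name t itself is the only observable difference):

class C:
    def meth(self):
        return self

a = C()
r1 = (a.meth)()
t = a.meth
r2 = t()
assert r1 is r2 is a   # same result either way; only the name t is new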
But please remember that you're talking with someone who has taken LOAD_METHOD for granted since 2013. And who has taken inline caches for granted since 2015. So, what would be the reason to take all that for granted and still proceed with the strict mode? Oh, the reasons are obvious: a) it's the natural extension of the above; b) it allows reaching much deeper (straight to the machine code, again), and by much cheaper means (machine code for a call will contain the same as in C, not 10x more code in guards). For comparison, CPython added LOAD_METHOD in 2016. And lookup caching started to be added in 2019. And it took almost 1.5 years to extend caching from a single opcode to a 2nd one. 1.5 years, Chris!

commit 91234a16367b56ca03ee289f7c03a34d4cfec4c8
Date: Mon Jun 3 21:30:58 2019 +0900
bpo-26219: per opcode cache for LOAD_GLOBAL (GH-12884)

commit 109826c8508dd02e06ae0f1784f1d202495a8680
Date: Tue Oct 20 06:22:44 2020 +0100
bpo-42093: Add opcode cache for LOAD_ATTR (GH-22803)

And the 3rd one, LOAD_NAME, isn't covered, and it's easy to see why: instead of using best-practice uniform inline caches, desired-to-be-better Python semantics spawned the following monsters:

co->co_opcache_map = (unsigned char *)PyMem_Calloc(co_size, 1);

typedef struct {
    PyObject *ptr;          /* Cached pointer (borrowed reference) */
    uint64_t globals_ver;   /* ma_version of global dict */
    uint64_t builtins_ver;  /* ma_version of builtin dict */
} _PyOpcache_LoadGlobal;

All that stuff sits in your L1 cache, thrashes something else in and out all the time, and makes it all still slow, slow, slow. "Perfectly capable" you say? Heh. -- Best regards, Paul mailto:pmiscml@gmail.com

On Mon, Dec 14, 2020 at 5:57 PM Paul Sokolovsky <pmiscml@gmail.com> wrote:
It really depends on what you mean by "equivalent". For instance, I'm sure YOU will agree that they have the semantic difference of causing an assignment to the name 't'. Additionally, Python will evaluate a before b and c in the first example, but must evaluate b and c, add them together, and only after that evaluate a. So, no, they aren't entirely equivalent. Obviously, in many situations, the programmer will know what's functionally equivalent, but the interpreter can't. Clarify what you mean by equivalence and I will be able to tell you whether I agree or not. (It's okay if your definition of equivalent can't actually be described in terms of actual Python code, just as long as you can explain which differences matter and which don't.) ChrisA

Hello, On Mon, 14 Dec 2020 18:05:07 +1100 Chris Angelico <rosuav@gmail.com> wrote:
I certainly agree. But the level at which I'm trying to discuss this matter is more "abstract interpretation"-ish. For example, "+" is a binary operator; you can't calculate "a + b + c" in one step. There're 2 "+", and thus 2 steps. And an intermediate result should be "stored somewhere". In different computation models that "somewhere" would be different, e.g. in the stack machine model, the intermediate result would be stored in a location on the value stack, and in the register machine model - in a ("temporary") register. But from the abstract interpretation PoV, all those means of storage are equivalent: a named user variable, a stack location, a temporary variable. (They have differences beyond the "storage" aspect, sure.)
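For instance, the anonymous variant of that storage is easy to observe (a quick sketch; a, b, c are dummy names and the string is only compiled):

import dis

# The intermediate result of "b + c" is held in an anonymous stack slot
# before the outer addition consumes it - the same role a named "t" plays.
dis.dis(compile("a + (b + c)", "<demo>", "eval"))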
So, let's try simple yes/no questions:

Example 1: a + b + c vs a + (b + c)

Question 1: Do you agree that there's a clear difference between the left and right expressions? Yes/no.

Example 2: a.b() vs (a.b)()

Question 2: Do you agree that there's a *similar* difference here as in Example 1? Yes/no.

Then of course, depending on the outcome of the last question, there would be further questions. Specifically:

If yes: How to put a solid formal basis behind the difference in Example 2 (because so far we're just riding on the similarity with Example 1). And how to explain it to a wider audience?

If no: What is the explanation of such a striking distinction in treatment of Example 1 vs Example 2?
ChrisA
[] -- Best regards, Paul mailto:pmiscml@gmail.com

On Tue, Dec 15, 2020 at 8:04 PM Paul Sokolovsky <pmiscml@gmail.com> wrote:
Yes, there is a difference.
No, there is no difference.
Uhh, it's called precedence and associativity? You know that (a + b + c) is equivalent to ((a + b) + c), not to (a + (b + c)). Is that formal enough? ChrisA

Hello, On Tue, 15 Dec 2020 20:17:37 +1100 Chris Angelico <rosuav@gmail.com> wrote:
Yes. But you answered "no" to Example 2. What makes you think that (a + b + c) is not equivalent to (a + (b + c)), but (a.b()) is equivalent to ((a.b)())? That's what I'm asking.
ChrisA
-- Best regards, Paul mailto:pmiscml@gmail.com

On Tue, Dec 15, 2020 at 9:22 PM Paul Sokolovsky <pmiscml@gmail.com> wrote:
Precedence and associativity? Since the two operators have the same precedence (in this case it's the same operator twice), order of evaluation is defined by its left-to-right associativity. Seriously, are you actually unaware of this fundamental, or are you playing dumb to try to make a point? I'm still trying to figure out your point here. The parentheses in one example are changing order of evaluation. In the other, they're not. I do not understand why this is even a question. I'm pretty sure most of us learned *in grade school* about BOMDAS or BODMAS or PEMDAS or whatever mnemonic you pick. Or maybe you have to wait till high school to learn that exponentiation is right-to-left associative. Either way, it's not new knowledge to most programmers. I'm done arguing, unless you actually come up with a real argument. ChrisA

On Tue, Dec 15, 2020 at 11:22 AM Chris Angelico <rosuav@gmail.com> wrote:
I've already put him in my killfile, but probably unwisely, I still see the follow-ups by you and other people I respect and enjoy reading discussion from. It feels like a chimp trying to pantomime a philosopher, really. As someone with a doctorate in philosophy, I feel triggered :-).
I'm pretty sure most of us learned *in grade school* about BOMDAS or BODMAS or PEMDAS or whatever mnemonic you pick.
I don't think I ever learned such acronyms! I mean, yes I learned about order of operations in grade school. But never with a mnemonic. -- The dead increasingly dominate and strangle both the living and the not-yet born. Vampiric capital and undead corporate persons abuse the lives and control the thoughts of homo faber. Ideas, once born, become abortifacients against new conceptions.

On Wed, Dec 16, 2020 at 3:16 AM David Mertz <mertz@gnosis.cx> wrote:
I learned BOMDAS - Brackets, O (varies in expansion but always minor things you don't often see), Multiplication, Division, Addition, Subtraction. For some reason it's also written BODMAS, which has the exact same meaning (since multiplication and division have the same precedence) but is harder to pronounce. PEMDAS uses "parentheses" instead of "brackets" (so it's probably an American English vs British English thing), and "exponentiation" in place of the first vowel. Whichever way you learned it, though, you probably learned a few quirks of algebraic notation that don't really apply to programming (such as the fraction bar), but for the most part, you'd have learned the exact model that most expression evaluators use. ("Most" because, as always, there are exceptions, but it's a good default to start with.) ChrisA

I'm going to answer the original question, even though I don't quite understand it:
Using explicit parenthesization to convey aspects of semantic meaning?
Absolutely not -- for no other reason than that it would break potentially a LOT of code. If there IS some new useful semantics that could be conveyed with a set of brackets, we're going to need to use another character. I still don't get what meaning might be called for, but even commonly used brackets, like [] or {}, would be better, because I don't think they can currently be used anywhere in which they have literally no meaning, like () can. -CHB
-- Christopher Barker, PhD Python Language Consulting - Teaching - Scientific Software Development - Desktop GUI and Web Development - wxPython, numpy, scipy, Cython

On Tue, Dec 15, 2020 at 04:15:52PM +0000, David Mertz wrote:
It feels like a chimp trying to pantomime a philosopher, really. As someone with a doctorate in philosophy, I feel triggered :-).
Quoting "A Fish Called Wanda": Otto: Apes don't read philosophy. Wanda: Yes they do, Otto, they just don't understand it. Although I don't think that Paul is a Nietzsche-quoting ex-CIA hired killer. At least I hope not. In fairness, Paul has a lot of interesting ideas, even if they don't always pan out. But this thread is an excellent example of how *not* to engage and persuade an audience: - long and rambling, slow to get to the point; - expecting the readers to make the same "Aha!" moment you did when you could just explicitly state your observation; - patronising statements that your readers are just a step away from getting the right answer, but will they make it? - repeated hints that you have seen the correct answer and reached enlightenment, without telling the reader what the answer is; - comparisons and analogies that don't work; (under Python semantics, the closest analogy to `(obj.method)()` is not `a+(b+c)` but `(a+b)+c`) Paul, if you are reading this, you are coming across as neuro-atypical. If that is the case, trust me on this, the strategy you are taking in this thread is very unsuccessful as a persuasive and/or teaching tool. Under existing Python semantics, round brackets (parentheses) have a few different meanings, but the relevant one as far as I can tell is grouping, which changes the order that operations are performed. In expressions, I don't think that there are any cases where brackets change the semantics of operations: `(b + c)` remains the plus operator even with the brackets, it just changes the order of operation relative to any surrounding expression. The only counter-example I can think of where brackets changed the semantics of a statement was the old Python 2 `except ...` statement: except A, B, C, D: block except (A, B, C, D): block If I recall correctly, the first catches exceptions A, B and C, and binds the exception to D; the second catches exceptions A, B, C and D and doesn't bind to anything. As you can imagine, this was an extremely error-prone and surprising "Gotcha". In principle, we could give `(obj.method)()` a distinct meaning to the unbracketed form. But such a distinction would be surprising, it would clash with grouping: (obj.method or fallback_function)() and I have no idea what distinct meaning you want to give it, or why. If you are serious about continuing this thread, please get to the point of *what* change in semantics you want to give the bracketed form and *why* you think it would be useful. -- Steve

On Tue, Dec 15, 2020 at 12:04:44PM +0300, Paul Sokolovsky wrote:
But they aren't equivalent: by definition, a named user variable has a user-visible side-effect, while other forms of storage may not: in high-level languages, stack locations, registers, and temporary variables (in a non-user-visible namespace) have no visible side-effects. So this is a critical distinction that you are not making: - there are computations where such intermediate results are visible to the user; - and there are other computations where such intermediate results are not visible to the user. Those two classes are not equivalent, except approximately.
Are we talking about Python? Then yes, there is a clear difference. In the first example, `a + b + c`, execution proceeds left to right: `a + b` first, then the result of that has c added on the right. The second example changes the order of operations: `b + c` is computed first, then a is added on the left.
If we are still talking about Python, then no, there is no difference between the two. In the first example, the name "a" is looked up, then the attribute "b", and then the result of that is called. In the second example, the brackets have no effect: first the name "a" is looked up, then the attribute "b", then the result of that is called. In this case, the brackets do not change the order of operation. It is like comparing `a + b + c` versus `(a + b) + c`. Or for that matter: a + b + c versus (((((((a) + (b)))) + (((c)))))) You can add all the redundant parentheses you like without changing the order of operation. Does this conversation have a point? You keep dropping hints that you want to introduce a semantic difference between `obj.meth` and `(obj.meth)`. Care to tell us what that difference is supposed to be? -- Steve

On 15/12/20 10:04 pm, Paul Sokolovsky wrote:
Yes, because the default order of operations in Python is defined so that a + b + c is the same as (a + b) + c.
No, because the default order of operations here already has a.b evaluated before making the call, so adding the parentheses changes nothing. -- Greg

Hello, On Tue, 15 Dec 2020 23:37:59 +1300 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
That's a good answer, thanks. But... it doesn't correspond to the implementation reality. As I showed right in my first mail, in "a.b()", "a.b" doesn't get evaluated at all (since CPython3.7). Instead, something completely different gets evaluated. Ok, not "completely", but "sufficiently" different. We can continue to beat on the side of "it's only a bytecode optimization, there's total disconnection between what happens in the compiled bytecode and the language syntax". But what if not?
-- Greg
-- Best regards, Paul mailto:pmiscml@gmail.com

On 16/12/20 12:24 am, Paul Sokolovsky wrote:
That's a good answer, thanks. But... it doesn't correspond to the implementation reality.
Why are we talking about implementation? You said you wanted to keep to the conceptual level. At that level, there is NO difference at all. -- Greg

Hello, On Wed, 16 Dec 2020 00:50:27 +1300 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
I'm not sure how well I was able to convey it, but even in the initial message I tried to ask for an asymptotically coherent theory which would apply across layers and would allow explaining all of the following:

1. Surface syntax, and the intuitive-semantics difference between "a.b()" and "(a.b)()". (Can add just "a.b" for completeness.)
2. Deeper (i.e. abstract) syntax difference between the above.
3. Code generation difference for the above, or in more formal terms, small-step semantics for the above.

I specifically write "asymptotically coherent", because we already know that a number of parts are missing (e.g. level 2 above is completely missing so far), and there can be random cases (mostly on the bytecode generation end) which don't fit into the theory, and to explain which we'd need to appeal to outside means like: "oh, we just didn't think about that" or "oh, that's a bytecode optimization".

Ok, so here's my theory. Which still starts with setting the stage.

Besides those simple operators they teach in school, there're others, less simple. Some are still pretty familiar to programmers. For example, C's ternary conditional "?:" operator. It's indeed usually named "?:", but that doesn't tell us much about its actual syntax:

expr1 ? expr2 : expr3

So, it's a ternary operator, unlike common unary or binary ones. And it's not prefix, postfix, or infix. Linguistics has a term for a generalized prefix/suffix/infix concept, so let's call such operators "affix".

Python also has a ternary conditional operator:

expr2 if expr1 else expr3

Which shows: a) operator lexics don't have to consist of punctuation, letters work too; b) it has a different order of expressions compared to C's version. Despite those striking differences, nobody even gets confused. Which shows the human ability to see deeper similarity across surface differences, on which ability we're going to rely in the rest of our discussion.

And we're almost there. The only intermediate step to consider is the call operator, "()". It's much older than the conditional operator in Python, but always was a special one:

expr(args)

So, it's binary, and it's also affix, as we can't ignore that closing paren. A note about "binary": a *function* may take multiple arguments. However, a *call operator*, in its abstract syntax, takes just a single 2nd arg, of the special type "args" (and the syntax of that allows zero or more args, positional and keyword, with starred and double-starred at trailing positions).

With all the above in mind, Python3.7, in a strange twist of fate, and without much ado, has acquired a new operator: the method call, ".()". It's a ternary operator with the following syntax:

expr.name(args)

It's an affix operator, with its 3 constituent characters nicely spread around the expressions it forms. Now, everything falls into place. An expression like:

expr.name

is an "attribute access operator" which gets compiled to the LOAD_ATTR instruction.

expr()

is a "call operator", which gets compiled to CALL_FUNCTION, and:

expr.name()

is a "method call operator", which gets compiled into LOAD_METHOD and the complementary CALL_METHOD opcodes.

CPython3.6 and below didn't have the ".()" operator, and compiled it as "attr access" + "function call", but CPython3.7 got the new operator, and compiles it as such (the single operator). The ".()" operator is interesting, because it's compounded from existing operators. It thus can be "sliced" using parentheses into individual operators.
And the meaning of (a.b)() is absolutely clear - it says "first compute 'a.b', and then call it", just the same as "a + (b + c)" says "first compute 'b + c', and then add that to 'a'".

So, why does CPython3.7+ still compile "(a.b)()" using LOAD_METHOD? The currently proposed explanation in this thread was "optimization", and let's just agree with it ;-). The real reason is of course different, and it would be nice to discuss it further.

But still, are there Python implementations which compile "(a.b)()" faithfully, with its baseline semantic meaning? Of course there are. E.g., MicroPython-derived Python implementations compile it in full accordance with the theory presented here:

obj.meth()
(obj.meth)()

$ pycopy -v -v objmeth.py
[]
00 LOAD_NAME obj (cache=0)
04 LOAD_METHOD meth
07 CALL_METHOD n=0 nkw=0
09 POP_TOP
10 LOAD_NAME obj (cache=0)
14 LOAD_ATTR meth (cache=0)
18 CALL_FUNCTION n=0 nkw=0
20 POP_TOP
21 LOAD_CONST_NONE
22 RETURN_VALUE

Discussion? Criticism? Concerns?
-- Greg
-- Best regards, Paul mailto:pmiscml@gmail.com

Hello, On Thu, 17 Dec 2020 00:03:51 +0100 Marco Sulla <Marco.Sulla.Python@gmail.com> wrote:
But that's not what this talk is about! It's about a new exciting (hmm, we'll see) feature which, it turned out, was there all this time, but was overlooked (so, no patches are needed right away). So, I'm asking fellow Python programmers if they recognize it. If they do, we can consider how to get more from that feature, and maybe some patches will be useful. And if they don't, no patches would help. -- Best regards, Paul mailto:pmiscml@gmail.com

On Thu, Dec 17, 2020 at 8:58 AM Paul Sokolovsky <pmiscml@gmail.com> wrote:
I like it. The idea of a 'method call operator' is quite cute, it's an alternate way of thinking about the situation that seems to be self-consistent (at least on the surface). BUT. It's only worth talking about alternate interpretations if there's a reasonable chance that introducing a new way of thinking about a problem will lead to some improvement: either functional enhancement, or better abstractions. Do you have concrete ideas about how treating this language construct as a new operator might end up bringing tangible benefits? Somewhat relatedly, I very much appreciate the simplicity and clean approach that Python takes with objects (briefly experiencing Ruby after having written Python made this benefit very clear). With Python 3, the distinction between a function and a method has been further reduced, and any change that risks moving us away from `obj.meth()` being functionally equivalent to `getattr(obj, 'meth')()` or `getattr(Cls, 'meth')(obj)` would have to have incredibly strong benefits to outweigh the cost of doing so. Steve

Hello, On Thu, 17 Dec 2020 10:03:14 +0000 Stestagg <stestagg@gmail.com> wrote: []
Thanks!
By now I've already mentioned a few times that the whole motivation to introduce it is to improve the performance of name lookup operations (specifically, method lookup). And in this regard, it continues my earlier-posted proposal of the "strict execution mode" (https://github.com/pycopy/PycoEPs/blob/master/StrictMode.md)
Do you have concrete ideas about how treating this language construct as a new operator might end up bringing tangible benefits?
Yes, under my proposal, the "method call operator" will literally call only methods, where "a method" is defined as "a function lexically contained within the class definition". This means that compared to the current lookup rules, a.b() won't need to look for "b" inside the "a" instance, but will look straight in a's class. In other words, the lookup process will largely approach that of C++. Of course, there's still the big difference that C++ method vtables are indexed by integer id, as all types (and thus, methods) are known at compile-time, while we will still need to maintain method "vdicts", where we map from symbolic method names to the actual method code. But that's the price to pay for the dynamic-typedness of the language. In other aspects, many existing C++-like optimizations of class hierarchies can be applied (if it all is coupled with the "strict mode part 1" proposal above).

To avoid ambiguity, shadowing a class method with an instance attribute won't work:

class Foo:
    def meth(self):
        pass

o = Foo()
# doesn't work
o.meth = 1

This is not some completely new restriction. For example, the following already doesn't work in Python:

class A:
    pass

o = A()
o.__add__ = lambda self, x: print("me called")
o + A()  # the lambda above is never called

So again, the new restriction is nothing but an existing restriction, applied consistently to all methods, not just dunder methods. And as it doesn't work with any method, we also can be explicit about it, instead of "not working silently", like the __add__ example above shows. So, back to the 1st example:

o.meth = 1  # leads to AttributeError or RuntimeError

That's why, well, it's called "strict mode" - erroneous actions don't pass silently.

Now, what if you have an object attribute which stores a callable (no matter if it's a bound method, a function, a class, or whatever)? You will need to call it as "(obj.callable_ref)(arg, kw=val)". This is a recent lucky improvement over the previous approach I had in mind, involving resurrecting the apply() function of good old Python1/Python2 days: apply(obj.callable_ref, (arg,), {"kw": val})
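If it helps, here's a rough model of the difference in today's Python (just an illustration: strict_call is a made-up helper approximating the proposed lookup, not actual syntax):

# Hypothetical sketch of the strict-mode method call: consult only the
# class (its MRO), never the instance dict.
def strict_call(obj, name, *args, **kwargs):
    func = getattr(type(obj), name)   # class-side lookup only
    return func(obj, *args, **kwargs)

class Foo:
    def meth(self):
        return "class method"

o = Foo()
o.__dict__["meth"] = lambda: "instance attr"   # the shadowing strict mode bans

print(strict_call(o, "meth"))  # -> "class method" (what strict a.b() would do)
print((o.meth)())              # -> "instance attr" (today's full lookup)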
There's a very good reason why I apply this effort to Python, as it's already a stricter/more strongly-typed language than most of its popular cohorts.
The proposal preserves the first-class nature of bound methods, so if you had `obj.meth()`, you can always write `t = obj.meth; t()`, or the forms that you show above, with the same effect. However, the reverse is not true - if you had `obj.attr`, you can't call the value of that attribute as `obj.attr()`, because that's the syntax to call methods, not attributes. You'll need to write it as `(obj.attr)()`. That syntax is fully compatible with existing Python. So, a program which adheres to the strict mode restrictions will also run in exactly the same way under the standard mode (in particular, under a Python implementation which doesn't implement the strict mode).
Steve
[] -- Best regards, Paul mailto:pmiscml@gmail.com

On Fri, Dec 18, 2020 at 5:02 AM Paul Sokolovsky <pmiscml@gmail.com> wrote:
But the addition operator isn't just calling __add__, so this IS a completely new restriction. You're comparing unrelated things.

class A:
    def __add__(self, other):
        print("I got called")

class B(A):
    def __radd__(self, other):
        print("Actually I did")

A() + B()  # prints "Actually I did": the subclass's reflected method is tried first

The operator delegation mechanism doesn't use the class as a means of optimization. It does it because it is the language specification to do so. I said I wasn't going to respond, but this one is SUCH a common misunderstanding that I don't want people led astray by it. "a+b" is NOT implemented as "a.__add__(b)", nor as "type(a).__add__(a, b)"! Can this thread move off python-ideas and onto pycopy-ideas please? It's not really talking about Python any more, it just pretends to be. ChrisA

Hello, On Fri, 18 Dec 2020 06:09:56 +1100 Chris Angelico <rosuav@gmail.com> wrote:
No, you're just shifting the discussion to something else. Special methods assigned to object instances don't get called, in general. If you can show how to assign an arbitrary dunder method to an instance and get it called by an operator, please do that. Otherwise, that was exactly the point to show.
So, the language specification for the "strict execution mode" will say that "the only way to define a method is syntactically in the class body". What's your problem with that? Do you often assign methods to individual instances after they're created? Please speak up and explain your use cases to us. [] -- Best regards, Paul mailto:pmiscml@gmail.com

On 17.12.20 21:58, Ethan Furman wrote:
First, it can use not only the method __add__ of the type of a, but also the method __radd__ of the type of b. Second, the lookup of __add__ differs from both a.__add__ and type(a).__add__. It differs from a.__add__ because it skips the instance dict and __getattr__. It differs from type(a).__add__ because it does not fall back to methods of the metaclass. This all is pretty complicated.
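A small sketch of the instance-dict point:

class A:
    pass

a = A()
a.__add__ = lambda other: "instance add"   # stored on the instance

try:
    a + a                # the + operator looks only at type(a), not at a
except TypeError:
    print("instance __add__ was never consulted")

print(a.__add__(a))      # plain attribute lookup does find it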

Hello, On Fri, 18 Dec 2020 12:29:10 +1300 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Surely no, as we touched on in the other email. That's how a "quick 2-clause idea" differs from a "full spec". The first presents the main idea (being able to call methods based on just the type namespace, not on the combined instance + type namespaces), whereas the "full spec" would need to consider "all other cases too". So, it's already clear that mod.func() syntax will continue to work as before. I don't foresee any problems with implementing that, do you?

Generally, the semantic changes discussed affect *only* user-defined classes. Module objects, being builtin, aren't affected by the new semantic aspect, so I would expect zero changes at all would be required to them in that regard. I'm basing this on the architecture of the MicroPython-based VM/object model. Again, if you foresee any issues with e.g. CPython, let me know, I can check that.

Beyond modules, there're other cases to consider. E.g., I said that an instance attribute cannot shadow a class's *method* of the same name, so you can reasonably ask "what about other class fields?". And I'd say "needs to be considered." I'd err on the side of banning any shadowing, but I have to admit I did use a pattern like that more than once myself:

class Foo:
    option = "default"

o1 = Foo()
print(o1.option)

o2 = Foo()
o2.option = "overridden"

Foo.option = "change default for all objs which didn't override it"

I personally wouldn't consider it the end of the world to switch that to:

def get_option(self):
    if "option" in self.__dict__:
        return self.option
    else:
        return self.__class__.option

But it certainly requires more consideration, and actually looking at a good corpus of code to see how common that is.
-- Greg
-- Best regards, Paul mailto:pmiscml@gmail.com

On 18/12/20 1:48 pm, Paul Sokolovsky wrote:
So, it's already clear that mod.func() syntax will continue to work as before. I don't foresee any problems with implementing that, do you?
What about this:

import random

class A:
    def choice(self, stuff):
        return stuff[0]

a = A()

def f(x, y):
    return x.choice(y)

print(f(random, [1, 2]))
print(f(a, ["buckle", "my shoe"]))

How much of this is allowed under your restricted semantics? -- Greg

Hello, On Fri, 18 Dec 2020 17:42:26 +1300 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
It's fully allowed, under both part 1 and part 2 of the strict mode. It won't be possible to optimize it in any way under just the "strict mode" idea, but again, it will work. Generally, the idea behind the strict mode is to optimize *dynamic name lookups*. It doesn't deal with *dynamic typing* in any way (well, no more than optimizing dynamic lookups into static ones (in some, not all, cases) would allow, and it does allow that). That's because: a) dynamic typing is what everybody loves about Python (myself included); b) there's already a way to deal with the dynamic typing issue - type annotations; c) dealing with typing issues generally requires type inference, and general type inference is a task of quite a different scale than the "strict mode" thingy I'm building here. (But ad-hoc type inference can be cheap, and again, the strict mode effectively enables type (and even value) inference for many interesting and practical cases.)
-- Greg
-- Best regards, Paul mailto:pmiscml@gmail.com

On 17/12/20 8:16 am, Paul Sokolovsky wrote:
With all the above in mind, Python3.7, in a strange twist of fate, and without much ado, has acquired a new operator: the method call, ".()".
Um, no. There is no new operator. You're reading far more into the bytecode change than is actually there.
So, why CPython3.7+ still compiles "(a.b)()" using LOAD_METHOD.
Because a.b() and (a.b)() have the same semantics, *by definition*. That definition has NOT changed. Because they have the same semantics, there is no need to generate different bytecode for them.
All that means is that MicroPython is missing a potential optimisation. This is probably a side effect of the way its parser and code generator work, rather than a conscious decision. Now, it's quite possible to imagine a language in which a.b() and (a.b)() mean different things. Does anyone remember Prothon? (It's a language someone was trying to design a while back that was similar to Python but based on prototypes instead of classes.) A feature of Prothon was that a.b() and t = a.b; t() would do quite different things (one would pass a self argument and the other wouldn't). I considered that a bad thing. I *like* the fact that in Python I can use a.b to get a bound method object and call it later, with the same effect as if I'd called it directly. I wouldn't want that to change. Fortunately, it never will, because changing it now would break huge amounts of code. -- Greg

On Thu, Dec 17, 2020 at 10:47 AM Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Ewww. Yes, that is definitely a bad thing. Just look at JavaScript, which has that exact distinction - a.b() will set 'this' to a, but t() would set 'this' to...... well, that depends on a lot of things, but it probably won't be a. JS's "arrow functions" behave somewhat more sanely, but at the expense of being per-instance, so the rules are a bit more complicated for constructing them. But at least you don't have to worry about lifting them out of an object. ChrisA

Hello, On Thu, 17 Dec 2020 12:46:17 +1300 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
So, here we would need to remember what constitutes "a good scientific theory". It's something which explains a sufficiently wide array of phenomena from sufficiently concise premises, predicts the effects of phenomena, and allows using all that in some way which is "useful". Hence, the theory of the call operator was presented. And if we look at the actual code, then "a.b" is compiled into LOAD_ATTR, and "a.b()" into LOAD_METHOD, *exactly* as the theory predicts. It also makes clear what is missing from the middle layer (abstract syntax) to capture the needed effect - literally, a MethodCall AST node. It also explains what's the meaning (and natural effect) of "(a.b)()". And that explanation agrees with the intuitive meaning of the use of parens for grouping. That meaning being "do stuff in parens before", and indeed, the theory tells us that "(a.b)()" should naturally be implemented by LOAD_ATTR, followed by CALL_FUNCTION. Which again agrees with how some implementations do that. Given all that, "unlearning" the concept of a method call operator may be no easier than "unlearning" the concept of natural numbers (or real numbers, or transcendental numbers, or complex numbers). Please be my guest.
By *a* particular definition. The theory is above a particular definition. It explains how "a.b" vs "a.b()" should be compiled, and indeed, they are compiled that way. It also explains how "(a.b)()" should be compiled, and the fact that particular implementations "optimize" it doesn't invalidate the theory in any way. And yeah, as soon as we, umm, adjust the definition, the theory gets useful for explaining what may need to be changed in the code generation.
Good nailing down! But it's the same for CPython. CPython compiles "(a.b)()" using LOAD_METHOD not because it consciously "optimizes" it, but simply because it's *unable* to represent the difference between "a.b()" and "(a.b)()". More specific explanation:

1. The MicroPython compiler is CST (concrete syntax tree) based, so it sees all those parens. (And yes, it has to do silly things to not miss various optimizations, and still misses a lot.)

2. The CPython compiler is AST (abstract syntax tree) based, and as the current ASDL definition (https://github.com/python/cpython/blob/master/Parser/Python.asdl) misses a proper MethodCall node, it conflates "a.b()" and "(a.b)()" together.

So, the proper way to address the issue would be to add an explicit MethodCall node. So far (for Pycopy), I don't dare take such a step, instead having a new "is_parenform" attribute on existing nodes, telling that the corresponding node was parens-wrapped in the surface syntax. (And I had it before, as it's needed to e.g. parse generator expressions with a recursive-descent parser.)
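That conflation is easy to confirm from CPython itself (a quick check using the ast module):

import ast

# Identical trees: the AST has no MethodCall node, and grouping parens
# leave no trace, so the compiler cannot tell the two forms apart.
print(ast.dump(ast.parse("a.b()", mode="eval")))
print(ast.dump(ast.parse("(a.b)()", mode="eval")))
assert ast.dump(ast.parse("a.b()")) == ast.dump(ast.parse("(a.b)()"))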
Now, it's quite possible to imagine a language in which a.b() and (a.b)() mean different things.
Not only that! It's even possible to imagine a Python dialect where "a.b" and "a.b()" would mean *exactly* what they are - the first is attribute access, the second is a method call, no exceptions allowed. But then the question will arise how to call a callable stored in an attribute. So tell me, Greg (the first thing which comes to your mind, as long as it's "a)" or "b)" please), what do you like better: a) (a.b)() syntax b) apply() being resurrected
I don't remember it. Turns out, Google barely remembers it either, and I had to give it a bunch of clarifying questions and "do you really mean that?" answers before I got to e.g. http://wiki.c2.com/?ProthonLanguage
Of course it never will. It == the current Python's semantic mode. Instead, new modes will capitalize on the newly discovered features. One such mode, humbly named "the Strict Mode", was presented here on the list recently. But that was only part 1 of the strict mode. We just discussed pillar #1 of the strict mode, part 2 - that pillar being the separation between methods and attributes. But you'll ask "perhaps we can separate them, but how can we *clearly* separate those, so there's no ambiguity?". Indeed, that would be pillar #2. It is inspired (retroactively of course, as I had that idea for many years) by feedback received during discussion of the idea of block-scoped variables. Some people immediately responded that they want shadowing to be disallowed (e.g. https://mail.python.org/archives/list/python-ideas@python.org/message/3IKFBQ...). Of course, it doesn't make any sense to disallow shadowing of block-scoped local variables, as that brings no theoretical or implementation benefits (and the practical benefits are addressed by linters or opt-in warnings). But those people were absolutely right - there're places in Python where shadowing is truly harmful. So, their wish is granted, but in the good tradition of wish-granting, not where and not in the way they wanted. For the strict mode, part 2, pillar #2 says: "It's not allowed to shadow a method with an attribute". And combined, parts 1 and 2 allow optimizing namespace lookups in Python, where part 1 deals with module/class namespaces, and part 2 with object namespaces. What's interesting is that part 2, just like part 1, of the strict mode doesn't really make any "revolutionary" changes. It just exposes, emphasizes, and makes consistent the properties the Python language already has. Ain't that cute? Do you spot any issues, Greg?
-- Greg
[] -- Best regards, Paul mailto:pmiscml@gmail.com

On 17/12/20 11:25 pm, Paul Sokolovsky wrote:
I'm pretty sure whoever added the optimisation fully intended it to apply to (a.b)() as well as a.b() -- given that they are supposed to have the same semantics, why would you *not* want to optimise both? So even if the AST were able to distinguish between them, there would be no reason to do so. Your alternative theory would be of use if you wanted to change the semantics of (a.b)(). But the semantics need to be defined first, then you make the AST and code generation whatever it needs to be to support those semantics.
a) (a.b)() syntax b) apply() being resurrected
I can't answer that without knowing what alternative semantics you have in mind for (a.b)(). You haven't really explained that yet.
We just discussed pillar #1 of the strict mode, part 2 - that pillar being the separation between methods and attributes.
So you *don't* intend to make any semantic changes? I'm still confused. -- Greg

Hello, On Fri, 18 Dec 2020 01:23:34 +1300 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
But I did by now, and you didn't need to wait for me to do it, because "(a.b)()" does *exactly* what *you* or anybody else would think it does, based on your knowledge of what grouping parens in expressions do. So again, "(a.b)()" first extracts the value of the "b" *attribute* from the object held in variable "a", then calls that value with 0 arguments. That's in striking contrast to "a.b()", which calls the *method* "b" of the object held in variable "a".
In a sense I don't. I intend to make very fine-grained, finely cut semantic *adjustments*, with the motive of being able to implement the semantics more efficiently, and with the constraint that a program valid under the new semantics is also valid (and has the same effect) under the old. Again, this is a continuation of the effort previously presented in https://mail.python.org/archives/list/python-ideas@python.org/thread/KGN4Q2E... where you can assess yourself how successful the "also valid under old" part was so far. For indeed, that's the distinguishing trait of how my effort differs from some (many?) other efforts, for example the Prothon project that you quoted - where people start with ideas already rather distinct from Python, and then can't resist diverging more, and more, and more. Whereas I try (with an aspiration to be pretty thorough) to make as few changes as possible to achieve the desired effect (which is the ability to write more efficient Python programs, not inventing a new language!)
-- Best regards, Paul mailto:pmiscml@gmail.com

On 18/12/20 1:52 am, Paul Sokolovsky wrote:
We're going round in circles. Are we talking about current Python semantics, or proposed new semantics? Please just give us a clear answer on that. If we're talking about current semantics, then my knowledge of what grouping parens do (which is based on what the Python docs define them to do) means that a.b() and (a.b)() are just different ways of writing *exactly* the same thing.
That's in striking contrast to "a.b()", which calls the *method* "b" of the object held in variable "a".
No, it doesn't. It calls *whatever* object a.b returns. It only calls a method if b happens to be a method. It might not be. It might be a callable object stored in an instance attribute, or a function stored in a module attribute, in which case the very same syntax is just an ordinary call, not a method call. That last case is something you might want to ponder. The same LOAD_METHOD/CALL_METHOD bytecode sequence is being executed, yet it's *not* performing a method call! How does your new improved theory of method calls explain *that*?
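The effect is easy to reproduce (a sketch; opcode names as in CPython 3.7-3.10, later versions renamed them; the class and attribute names are made up):

    import dis

    class C:
        pass

    c = C()
    c.fn = lambda: "plain function in an instance attribute"

    # The compiler cannot know what "fn" will be at runtime, so it emits
    # LOAD_METHOD/CALL_METHOD anyway:
    dis.dis("c.fn()")
    print(c.fn())   # ...and yet no bound method is involved in this call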
In a sense I don't. I intend to make very fine-grained, finely cut semantic *adjustments*
If you come back when you've figured out exactly what those adjustments will be and are able to explain them clearly, we will have something to talk about. -- Greg

Hello, On Fri, 18 Dec 2020 03:05:02 +1300 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Yep :-(. I wonder if reading each other's arguments more thoroughly could help.
I'm not talking about the current Python semantics, because there's nothing to talk about. We both agreed it won't change, so what do you want to talk about re: it? The only purpose "current Python semantics" serves in the discussion is that the new semantics gets explained in terms of its difference with respect to the current. Otherwise, I'm talking about how the proposed new semantics comes out smoothly and naturally from the existing Python semantics, syntax, and other details. And feedback on the "comes out smoothly and naturally" part is exactly what I'm interested in.
If we're talking about current semantics, then my knowledge
Right, so if the current semantics clouds your view, one approach would be to put it aside, and look at things with an unencumbered eye, to see if you can see the things I'm talking about.
Bingo! That's exactly what I seek to improve, so that it "returns" a *method* with much higher probability, and not just "whatever".
So, it seems like you haven't read what I wrote in the message https://mail.python.org/archives/list/python-ideas@python.org/message/PF42DP... , because you explain how the current semantics work, after I explained how it's being changed to work differently. And if something is unclear in my explanation, just let me know. It's not like the message above is a 2-liner that leaves much for you to guess. It's not a 30KB text either, which chews thru all aspects (like my previous proposal does). So, if some aspect *of the new semantics* is unclear, I'd be glad to elaborate on it. (Ahead of a 30KB text, which is coming, but much later).
As any other such case: "an implementation artifact grounded in semantic ambiguities of a dynamic language". My aim is to largely reduce such ambiguities, while still preserving the dynamic-typing spirit of the language (where it's needed). So, a compiler which employs my previous proposal https://mail.python.org/archives/list/python-ideas@python.org/thread/KGN4Q2E... will be able to optimize that to just CALL_FUNCTION in *most* cases. But not all cases, yeah.
Again, the end of https://mail.python.org/archives/list/python-ideas@python.org/message/PF42DP... drafts the scheme of the new proposed semantics (codenamed "the strict mode part 2"). I will also try to summarize it in a different (hopefully, clearer and more down-to-earth) way in a response to another participant. Eventually I'll also write down a full "spec", but that likely will take quite some time (and then will be a long read). So, if you happen to be really interested in that stuff, while waiting for the new spec, you may be interested to skim thru the previous part of the proposal, https://mail.python.org/archives/list/python-ideas@python.org/thread/KGN4Q2E... or formatted version at https://github.com/pycopy/PycoEPs/blob/master/StrictMode.md
-- Greg
-- Best regards, Paul mailto:pmiscml@gmail.com

On Thu, Dec 17, 2020 at 03:52:51PM +0300, Paul Sokolovsky wrote:
In CPython, both generate exactly the same byte-code, and both will call any sort of object. Or *attempt* to call, since there is no guarantee that the attribute returned by `a.b` (with or without parens) will be a callable object. You are imagining differences in behaviour which literally do not exist.
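Checking that takes a couple of lines (a sketch, for CPython 3.7 or later):

    import dis

    def f(a): a.b()
    def g(a): (a.b)()

    assert f.__code__.co_code == g.__code__.co_code   # identical byte-code
    dis.dis(f)   # LOAD_FAST a; LOAD_METHOD b; CALL_METHOD 0; POP_TOP; ...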
-- Steve

On Fri, Dec 18, 2020 at 01:23:34AM +1300, Greg Ewing wrote:
Not only are they *supposed* to have the same semantics, but they *literally do* have the same semantics. The CALL_METHOD op code doesn't just call methods, just as the CALL_FUNCTION op code doesn't just call functions. The only difference between them is the implementation of *how* they perform the call. -- Steve

On Wed, Dec 16, 2020 at 10:16:01PM +0300, Paul Sokolovsky wrote:
With all the above in mind, Python3.7, in a strange twist of fate, and without much ado, has acquired a new operator: the method call, ".()".
No it hasn't. That's not a language feature, it is not documented as a language feature, and it could be removed or changed in any release without any deprecation or notice. It is a pure implementation detail, not a language feature. There is no documentation for a "method call" operator, and no interpreter is required to implement either LOAD_ATTR or CALL_METHOD. Byte code instructions are not part of the Python language. Every interpreter is free to decide on whatever byte codes it likes, including no byte codes at all. IronPython uses whatever primitives are offered by the .Net CLR, Jython uses whatever the JVM offers. Nuitka intends to generate C code; Brython generates Javascript. Suppose that I forked CPython 3.7 or later, and made a *single change* to it. When compiling expressions of the form

    expr.name(...)   # arbitrary number of arguments

my compiler looks for an environment variable PYOPTIMIZEMODE. If that environment variable is missing, empty or false, the above expression would be compiled using the old LOAD_ATTR and CALL_FUNCTION opcodes. But if it existed and was true, the LOAD_METHOD and CALL_METHOD opcodes would be used instead. Two questions: (1) Is this a legal Python implementation? (2) Apart from some performance differences, what user-visible difference to the behaviour of the code does that environment variable cause? I think that the answers are (1) Yes and (2) None whatsoever.
It's a ternary operator with the following syntax:
expr.name(args)
No it isn't. It is two pseudo-operators, one of which is a binary "attribute lookup" operator:

    expr.name

and the other is an N-ary "call" operator. I say *pseudo* operator, because the meaning of "operator" is documented in Python, and neither `.` nor function call `( ... )` is included as an actual operator. But the important thing here is that they are two distinct operations: lookup the attribute, and call the attribute.
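Spelled out with getattr, which makes the first of the two steps explicit (a sketch; the number is arbitrary):

    x = 1000
    tmp = getattr(x, "bit_length")   # step 1: the binary "attribute lookup"
    print(tmp())                     # step 2: an ordinary call -> 10
    print(x.bit_length())            # the same two steps written with the dot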
The language does not specify what, if any, instructions the dot will be compiled to -- or even if it is compiled *at all*. A pure interpreter with no compilation stage would still be a valid Python implementation (although quite slow). Because Python is Turing complete, we could implement a full Python interpreter using a clockwork "Difference Engine" style machine, or a Turing Machine, or by merely running the code in our brain. None of these require the use of a LOAD_ATTR instruction. The parens make **no semantic difference**, which is what we have been saying for **days**.
The CPython byte-code is identical, parens or no parens, but more importantly, the *semantics* of the two expressions, as described by the language docs, require the two to be identical.
And here is a bracketed expression when LOAD_ATTR gets used:
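(A sketch of one such expression -- any parenthesized callee that is more than a bare attribute will do; opcode names as in CPython 3.7-3.10:)

    import dis
    dis.dis("(a.b or c)()")
    # a.b compiles to LOAD_ATTR and the call to CALL_FUNCTION here, not to
    # LOAD_METHOD/CALL_METHOD, because the callee node is no longer a bare
    # attribute access.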
which categorically falsifies your prediction that a parenthesized dot expression followed by call will use CALL_METHOD.
-- Steve

On Tue, Dec 15, 2020 at 02:24:48PM +0300, Paul Sokolovsky wrote:
As I showed right in my first mail, in "a.b()", "a.b" doesn't get evaluated at all (since CPython3.7).
`a.b` still has to be looked up, even with the new fast LOAD_METHOD byte-code. The difference is that it may be able to avoid instantiating a MethodType object, since that would be immediately garbage-collected once the function object it wraps is called. The lookup still has to take place:
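A quick way to see the lookup still firing (a sketch; the class name is made up):

    class Loud:
        def __getattribute__(self, name):
            print("looking up", name)    # runs on every attribute access
            return object.__getattribute__(self, name)
        def meth(self):
            return 1

    Loud().meth()   # prints "looking up meth", then calls it: the lookup
                    # always happens; only the temporary MethodType object
                    # can be optimized away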
If the above example doesn't convince you that you are mistaken, how about this?
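(A sketch of such a demonstration; the class and method names are made up:)

    class Obj:
        def __getattr__(self, name):
            # Normal lookup fails, so the "method" is manufactured here,
            # at attribute-lookup time:
            if name == "method":
                return lambda: "made on the fly"
            raise AttributeError(name)

    obj = Obj()
    print(obj.method())   # works, although no such method was ever defined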
In this second demonstration, obj.method *doesn't even exist* ahead of time and has to be created dynamically on attribute lookup, before it can be called.
-- Steve

Hello, On Mon, 14 Dec 2020 02:17:52 -0500 David Mertz <mertz@gnosis.cx> wrote:
Right, thanks. But the original question was about a somewhat different matter: if you agree that there's a difference between "a + b + c" vs "a + (b + c)", do you agree that there's a difference similar in nature between "a.b()" vs "(a.b)()"? If no, then why? If yes, then how to explain it better? (e.g. to Python novices). -- Best regards, Paul mailto:pmiscml@gmail.com

On Tue, Dec 15, 2020 at 8:08 PM Paul Sokolovsky <pmiscml@gmail.com> wrote:
https://docs.python.org/3/reference/expressions.html#operator-precedence ChrisA

Hello, On Tue, 15 Dec 2020 20:18:11 +1100 Chris Angelico <rosuav@gmail.com> wrote:
No worries, that table is not complete. For example, "," (comma) is a (context-dependent) operator in Python, yet that table doesn't have explicit entry for it. Unary "*" and "**" are other context-dependent operators. (Unary "@" too.)
ChrisA
-- Best regards, Paul mailto:pmiscml@gmail.com

On 15/12/20 11:28 pm, Paul Sokolovsky wrote:
Those things aren't considered to be operators. The term "operator" has a fairly specific meaning in Python -- it's not just "any punctuation mark". It's true that the operator precedence table won't tell you the precedence of everything in Python -- you need to consult the grammar for the rest. -- Greg

On Mon, Dec 14, 2020 at 01:09:56AM +0300, Paul Sokolovsky wrote:
Okay, I'll bite. Of course there is a difference: the first statement is 10 characters long, the second is 12 characters long. Are you asking for a semantic difference (the two statements do something different) or an implementation difference (the two statements do the same thing in slightly different ways)? Implementation differences are fairly boring (to me): it might happen to be that some Python interpreters happen to compile the first statement into a slightly different set of byte codes to the second. I don't care too much about that, unless there are large performance (speed or memory) differences. For what it is worth, Python 1.5 generates the exact same byte code for both expressions; so does Python 3.9. However the byte code for 1.5 and for 3.9 are different. However, the semantics of the two expressions are more or less identical in all versions of Python, regardless of the byte-code used. (I say "more or less" because there may be subtle differences between versions, relating to the descriptor protocol, or lack thereof, old and new style classes, attribute lookup, and handling of unbound methods.) [...]
At a suitable level of abstraction, there is no difference. The suitable level of abstraction is the level of the Python execution model, where `(expression)` and `expression` mean the same thing, the brackets being used for grouping.
That clearly has different semantics from the first two: it has the side-effect of binding a value to the name t. I'm not sure where you think this question is going to lead us. Wherever it is, I wish you would get to the point. Are you suggesting that we give a *semantic* difference to: ( expression ) compared to the unbracketed `expression`? -- Steve

Hello, On Mon, 14 Dec 2020 19:39:27 +1100 Steven D'Aprano <steve@pearwood.info> wrote:
Fair enough.
I'm asking for semantic difference, it's even in the post title. But the semantic difference is not in "what the two statements do", but in "what the two statements mean". Difference in "doing" is entailed by difference in "meaning". And there may be no difference in "doing", but still a difference in "meaning", as the original "1+2+3" vs "1+(2+3)" example was intended to show.
Implementation differences are fairly boring (to me):
Right. Implementation points are brought into the discussion only to illustrate the issue. As mentioned above, the actual progression is the opposite: first there's semantic meaning, then it's implemented. So, what's the semantic meaning of "a.b()" such that it gets compiled with the LOAD_METHOD bytecode?
Right, and the question is what semantic (not implementational!) shift happened in 3.7 (that's the point when it started to be compiled differently).
However, the semantics of the two expressions are more or less identical in all versions of Python, regardless of the byte-code used.
That's what I'd like to put under scrutiny.
Right, and exactly those "subtle differences" are what I'd like to discuss. I'd like to start, however, with a more abstract model of the difference in meaning, but afterwards, if common ground is found, it would be interesting to check the specific not-exactly-on-surface Python features which you list. []
The level of abstraction I'm talking about is where you look not just at "`(expression)` vs `expression`" but at: expression <op> expression vs expression <op> (expression) Where <op> is an arbitrary operator. As we already concluded, those do have difference, even considering such a simple operator as "+". So... what can we say about the difference between a.b() and (a.b)() then?
But that's yet another good argument to introduce block-level scoping to Python (in addition to the already stated great arguments), because then, (a.b)() will be *exactly* equivalent to (inside a function):

    if 1:
        const t = a.b
        t()

This neither gets affected by the surrounding environment (all variables introduced are new, regardless of their names), nor affects it (all variables are block-local, and not visible outside the block).
I'm not sure where you think this question is going to lead us. Wherever it is, I wish you would get to the point.
I'm sorry if this looks like a quiz, that's not the intention. But I really would like to see if other people can spot in this stuff what I spotted (after pondering it), and I don't want to bias you in any way by jumping to my "conclusions". I do believe we'll get there, but then I don't want to be biased myself either. That's why it's a step-by-step process, and I appreciate that the people here are willing to walk it.
Hopefully, that was answered above. To end the post with a summary, I'm suggesting that there's a difference between: expression <op> expression vs expression <op> (expression), which is hopefully hard to disagree with. Then I'm asking how consistent we are in understanding and appreciating that difference, taking the example of: a.b() vs (a.b)() (And it's not a purely language-lawyering question, it has practical consequences.) -- Best regards, Paul mailto:pmiscml@gmail.com

On Tue, Dec 15, 2020 at 8:49 PM Paul Sokolovsky <pmiscml@gmail.com> wrote:
Have you read the release notes? https://docs.python.org/3/whatsnew/3.7.html#optimizations Method calls are now up to 20% faster due to the bytecode changes which avoid creating bound method instances. (Contributed by Yury Selivanov and INADA Naoki in bpo-26110.) It is an *optimization*. There are NO semantic differences, other than the ones you're artificially creating in order to probe this. (I don't consider "the output of dis.dis()" to be a semantic difference.) Why do you keep bringing up irrelevant questions that involve order of operations? The opcode you're asking about is *just* an optimization for "look up this method, then immediately call it" that avoids the construction of a temporary bound method object. The parentheses are a COMPLETE red herring here. What is your point? ChrisA

On 15/12/20 10:49 pm, Paul Sokolovsky wrote:
There was no semantic shift. The change had *nothing* to do with semantics. It was *purely* an optimisation. I'm not sure what we can say to make this any clearer.
There is *sometimes* a difference, depending on exactly what the two expressions are, and what <op> is.
There is no inconsistency. Note also that: 1 + 2 * 3 is the same as 1 + (2 * 3) because the default order of operations already has * evaluated before +. The same kind of thing is happening with a.b() vs (a.b)(). -- Greg

On Tue, Dec 15, 2020 at 12:49:26PM +0300, Paul Sokolovsky wrote:
So far all you have talked about is implementation differences such as whether intermediate results are put on a stack or not, and differences in byte-code from one version of Python to another.
Right, so why are you wasting time talking about what they *do*, i.e. whether they put intermediate results on the stack, or in a register?
In the case of ints, there is no difference in meaning. For integers, addition is associative, and the order does not matter. So here you *say* you are talking about semantics, but you are actually talking about implementation. With integers, the semantics of all of these are precisely the same:

    1 + 2 + 3
    (1 + 2) + 3
    1 + (2 + 3)
    3 + (2 + 1)

etc. The order in which you *do* the additions makes no difference to the semantics.
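(For contrast, with a user type whose __add__ has visible side effects the grouping does become observable -- a sketch, with made-up names:)

    class Tracer:
        def __init__(self, label):
            self.label = label
        def __add__(self, other):
            print("add(%s, %s)" % (self.label, other.label))
            return Tracer("(%s+%s)" % (self.label, other.label))

    a, b, c = Tracer("a"), Tracer("b"), Tracer("c")
    a + b + c     # prints add(a, b), then add((a+b), c)
    a + (b + c)   # prints add(b, c), then add(a, (b+c))

That's exactly why the integer examples above are all equivalent while the general case is not.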
- Look up the name "a" in the local namespace;
- look up the attribute name "b" according to a's MRO, including the use of the descriptor protocol, slots, etc;
- call whatever object gets returned.

[...]
Absolutely none. There was a semantic shift, but it was back in Python 2.2 when new style classes and the descriptor protocol were introduced.
Okay, the major semantic differences include:

- in Python 1.5, attribute name look-ups call `__getattr__` if the name is not found in the object's MRO;
- in Python 3.9, attribute name look-ups first call `__getattribute__` before checking the MRO and `__getattr__`;
- the MRO is calculated differently between 1.5 and 3.9;
- in 3.9, the descriptor protocol may be invoked;
- descriptors and the descriptor protocol did not exist in 1.5;
- there are a few differences in the possible types of `obj.meth` between the versions, e.g. Python 1.5 had both bound and unbound instance methods, while Python 3.9 does not.

There may be other differences, but those are the most important ones I can remember.
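(For the descriptor-protocol item, a sketch of what a Python 3 attribute lookup does under the hood; the names are illustrative:)

    class C:
        def meth(self):
            return 42

    c = C()
    # Functions are descriptors in Python 3: looking up c.meth invokes
    # __get__, which manufactures a fresh bound-method object every time.
    bound = C.__dict__["meth"].__get__(c, C)
    print(bound(), c.meth())   # 42 42
    print(c.meth is c.meth)    # False: a new MethodType object per lookup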
See above.
That is an extremely straight-forward change in execution order. Whether that makes any semantic difference depends on whether the operations involved are associative or not.
There isn't one. Even though the `.` (dot) and `x()` (call) are not actual operators, we can treat them as pseudo-operators. According to Python's precedence rules, the first expression `a.b()` is the same as:

- lookup name a
- lookup attribute b
- call

and the second `(a.b)()` is:

- lookup name a
- lookup attribute b
- call

which is precisely the same. The parens make no semantic difference. Paul, I think you have fooled yourself by comparing two *different* situations. You compare a use of parens where they change the order of operations:

    a + b + c
    a + (b + c)

but you should be looking at this:

    (a + b) + c   # parens are redundant and make no difference

That is exactly equivalent to your method example:

    (obj.meth)()  # left pair of parens are redundant
You are changing the rules of the discussion as you go. You said nothing about a hypothetical new feature of block scopes and constants. You gave an example of existing Python code: t = obj.meth; t() There is no if block here, no new scope, no constants. t is a plain old ordinary local variable in the current scope.
There may or may not be a difference, depending on the associativity and precedence rules involved.
Whereas this example has only a single interpretation of the precedence:

- lookup a in the current scope;
- lookup attribute b on a;
- call the result.

There's no other order of operations available, so no way for the parens to change the order of operations:

- you cannot call a.b until you have looked up b on a;
- you cannot lookup b on a until you have looked up a.

Let's be concrete:

    s = "hello world"
    s.upper()   # returns "HELLO WORLD"

There is only one possible order of operations:

- lookup s;
- lookup "upper" on s;
- call the resulting method.

You cannot use parens to change the order of operations:

- call the resulting method first (what method?)
- lookup "upper" on s (what's s?)
- lastly lookup s (too late!)

-- Steve

On 13/12/2020 22:09, Paul Sokolovsky wrote:
No. The value of an expression in parentheses is the value of the expression inside the parentheses, and in this case does not affect the order of evaluation.
The explanation is an optimisation introduced in 3.7 that the use of an intermediate variable prevents. The compiler applies it when it can see the only use of the attribute is an immediately following call. Having burrowed into the implementation, I'm certain it tries hard to be indistinguishable from the unoptimised implementation (and succeeds I think), even to the point of failing in the same way when that is the programmed outcome. LOAD_METHOD goes far enough down the execution of LOAD_ATTR to be sure the bound object would be a types.MethodType containing a pair of pointers that CALL_FUNCTION would have to unpack, and pushes the pointers on the stack instead of creating the new object. Otherwise it completes LOAD_ATTR and pushes a bound object and a NULL, which is what CALL_METHOD uses to decide which case it is dealing with. The meaning of the code is what it does detectably in Python, not what it compiles to (for some definition of "detectably" that doesn't include disassembly). Jeff Allen

Hello, On Tue, 15 Dec 2020 08:25:25 +0000 Jeff Allen <ja.py@farowl.co.uk> wrote:
You're on the right track. (Well, I mean you're on the same track as me.) So, what's the order of evaluation and what's being evaluated at all?
Right. But I would suggest we rise a bit in abstraction level, and think not in terms of "intermediate variables" but in terms of "intermediate storage locations". More details in today's reply to Chris Angelico.
You're now just a step away from the "right answer". Will you make it? I did. And sorry, the whole point of the discussion is to see if the whole path, each step on it, and the final answer are as unavoidable as I now imagine them to be, so I can't push you towards it ;-).
Having burrowed into the implementation,
Great! As I mentioned in the other replies, I brought implementation matters (disassembly) in to represent the matter better. But the proper way is to start with semantics, and then consider how to implement it (and those considerations can have a feedback effect on the desired semantics too, of course). So, regardless of whether it was done like that or not in that case (when LOAD_METHOD was introduced), let's think about what semantics [could have] led to the LOAD_METHOD vs LOAD_ATTR implementation? []
Jeff Allen
-- Best regards, Paul mailto:pmiscml@gmail.com

On 15/12/20 11:16 pm, Paul Sokolovsky wrote:
The fact that it's a *named* intermediate storage location is important, because it means the programmer can see it, and will expect it to hold a bound method. So the compiler can't optimise away the bound method creation in that case. Well, it could if it could prove that the intermediate value isn't used for anything else subsequently, but that seems like a very rare thing for anyone to do. Why bother naming it if you're only going to call it once and then throw it away? So the compiler only bothers with the most common case.
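(Concretely -- a sketch, with made-up names:)

    import types

    class C:
        def meth(self):
            return "called"

    obj = C()
    t = obj.meth    # the name forces a real bound-method object into existence
    print(isinstance(t, types.MethodType))   # True
    print(t.__self__ is obj)                 # True: the binding is observable,
                                             # so it can't be optimized away here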
You're now just a step away from the "right answer". Will you make it?
I'll be interested to find out what you think the "right" answer is. Or what the actual question is, for that matter -- that's still not entirely clear. -- Greg

On Tue, Dec 15, 2020 at 01:16:21PM +0300, Paul Sokolovsky wrote:
You're now just a step away from the "right answer". Will you make it? I did.
Sorry Paul, but you didn't. You fooled yourself by comparing chalk and cheese, and imagining that because you can eat cheese (change the order of operation by using parens, which is a semantic difference), you can also eat chalk (imagine a semantic difference between `obj.method` and `(obj.method)`). Your mistake was comparing `(obj.method)()` with `a + (b + c)` when you should have compared it to `(a + b) + c`. Not every source code difference has a semantic difference:

    x = 1
    x=1
    x = 1
    x = 1 or None
    x = (1)

all mean the same thing. Putting parens around the final (1) changes nothing. Let's get away from using round brackets for function call, because it clashes with the use of round brackets for grouping. All we really need is a *postfix unary operator*:

    x()   # the brackets are "postfix unary zero-argument function call"

Let's use the factorial operator, "bang" `!` instead:

    obj.attr!

has to be evaluated from left to right under Python's rules. You can't apply the factorial operator until you have looked up attr on obj, and you cannot lookup attr until you have looked up obj. So the only possible order of operations is left to right. This is not meaningful:

    # attr factorial first, then lookup obj, then the dot lookup
    obj.(attr!)

but while this is meaningful, the order of operations is unchanged:

    # lookup obj first, then dot lookup attr, then factorial
    (obj.attr)!

Replace the factorial postfix operator with the `()` call operator, and the logic remains the same. -- Steve

Hello, On Tue, 15 Dec 2020 23:28:53 +1100 Steven D'Aprano <steve@pearwood.info> wrote:
But that's not what I was talking about. I was talking about the difference between `obj.method()` and `(obj.method)()`, but you in your recent emails keep reducing that to `obj.method` and `(obj.method)`. Sorry, but you can't just drop those parens at the end, they have a specific meaning (a call). And I initially optimized the presentation for character count with examples like "a.b()", but perhaps I should have written "a.b(foo, bar, baz)", so that the "(foo, bar, baz)" part was big and pronounced and there was no desire to drop it silently.
Your mistake was comparing `(obj.method)()` with `a + (b + c)` when you should have compared it to `(a + b) + c`.
No, the comparison was to show that placing parens *does* already change the semantic meaning of operator sequences. The example could only confuse you if you try to compare chalk and cheese, sorry, "+" and method call, directly. They behave with regard to parens *similarly* to, but not *exactly* as, you might have thought.
Yeah, but imagine if parens were put like: x(= 1). That would completely change the meaning. Like, it would become a SyntaxError in current Python, but that actually means we could assign a new meaning to it! Indeed, how many proposals to use up that syntax have we had here on python-ideas? 1, 2, 3? Zero, you say? Oh, I'm sure someone will accept the challenge ;-). For example, I believe we had proposals for syntax like x(a=) already, no?
Let's get away from using round brackets for function call, because it clashes with the use of round brackets for grouping.
I wouldn't say they "clash". They are completely disambiguated syntax-wise. But based on the discussion we're having here, it's fair to say that some people get confused seeing parens with different meanings together, for example you keep dropping one of the paren pairs ;-).
All we really need is a *postfix unary operator*.
Warm! I'd even say "hot", but I know it won't click, even with the following clarifications:

1. A function call is not a postfix unary operator. It's a binary operator. Its syntax is: expr(args)
2. It's not even an infix operator, like for example "+". You can't really ignore that closing paren - without it, the syntax is invalid.

So, the function call operator "()" is such a funky operator which is "spread around" the new expression it forms.
Sounds good, for as long as they're separate operators. But look at Python's conditional operator: foo if bar else baz. Getting warmer, no?
Not necessarily. Some operators are less simple than others. Let the conditional operator be the witness.
-- Steve
[] -- Best regards, Paul mailto:pmiscml@gmail.com

On 2020-12-15 5:16 a.m., Paul Sokolovsky wrote:
Oh, that was the point of the discussion? Wonderful then, I can easily answer. Considering that, over multiple days of discussion, literally nobody came to the same conclusion that you did, it's obvious that the whole path, each step on it, and the final answer are NOT as unavoidable as you imagine them to be. I think we can consider this case closed. Alexandre Brault

Hello, On Thu, 17 Dec 2020 15:28:47 -0500 Alexandre Brault <abrault@mapgears.com> wrote:
Right, that means the spec for "strict mode, part 2" will need to be as long and detailed as that for "strict mode, part 1", and collect together all the stuff which was discussed here over these multiple days. Whereas if other people had come to that conclusion, it could have been much shorter and faster to write.
I think we can consider this case closed
Alexandre Brault
-- Best regards, Paul mailto:pmiscml@gmail.com
participants (12)
- Alexandre Brault
- Chris Angelico
- Christopher Barker
- David Mertz
- Ethan Furman
- Greg Ewing
- Jeff Allen
- Marco Sulla
- Paul Sokolovsky
- Serhiy Storchaka
- Stestagg
- Steven D'Aprano