Chris Angelico writes:
> While it's possible to avoid some of those, we can't hold the language back because *someone who doesn't understand* might misunderstand.
Opposing the proposal wasn't the point of quoting Steve; the point was to provide IMO improved language in case the proposal gets adopted. So far, I oppose this proposal because I don't need it, I don't see that anybody else needs it *enough to add syntax*, and I especially don't see that anybody else needs it enough to add syntax that, in the opinion of some folks who know this stuff way better than I do, *might get in the way of a more general feature in the nearish future*. (I know you differ on that point and I understand why; I just don't yet agree.) None of that has anything to do with user misunderstanding; it's a different assessment of the benefits and costs of adoption.
> Resolving a promise to an object on first reference would break that pattern completely, so I would expect most people to assume that "each time it is needed" means "each time you omit the argument".
Back to the educational issues: we're not discussing most people. I understand that the new syntax is well-defined and not hard to understand; to me, most people aren't an educational issue. We're discussing programmers new to all patterns Pythonic. You're already breaking the pattern of immediate evaluation (if they even understand it yet), and the example of generators (are they well understood before default arguments are?) shows that code which appears to be evaluated at definition may in fact be evaluated only when called for. Come to think of it, I don't think next() is the comparable point; it's the call to the generator function to get an iterable object. IMO, that's a better argument for your point of view.
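To make the distinction concrete, here is a minimal sketch in today's Python; the function names are mine, and the proposed => spelling appears only in a comment since it doesn't run anywhere yet:

    # Early binding: the default expression runs exactly once, when the
    # def statement is executed, and the resulting object is reused.
    def append_early(item, target=[]):
        target.append(item)
        return target

    # Late binding, emulated with the usual sentinel idiom: the default
    # expression runs each time the caller omits the argument.
    # Under the proposal this would be spelled roughly
    #     def append_late(item, target=>[]): ...
    def append_late(item, target=None):
        if target is None:
            target = []          # re-evaluated on every defaulted call
        target.append(item)
        return target

    print(append_early(1), append_early(2))   # [1, 2] [1, 2] -- one shared list
    print(append_late(1), append_late(2))     # [1] [2]       -- a fresh list each call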
> And that's what happens when you need to be pedantically correct. Not particularly useful to a novice, especially with the FUD at the end.
I have no idea what you mean by "that's what happens," except that apparently you don't like it. As I see it, a novice will know what a function definition is, and where it is in her code. She will know what a function call is, and where it is in her code. She will have some idea of the order in which things "get done" (in Python, "executed", but she may or may not understand that def is an executable statement). She can see the items in question, or see that they're not there when the argument is defaulted. To me, that concrete explanation in terms of the code on the screen will be more useful *to the novice* than the more abstract "when needed". As for the "FUD", are you implying that you agree with Steve's proposed text? So that if the programmer is unsure, it's perfectly OK to use early binding, no bugs there?
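For what it's worth, the "def is an executable statement" part can be shown on screen just as concretely (a tiny illustration with made-up names, nothing specific to the proposal):

    def loud_default():
        print("default evaluated")
        return 0

    print("before def")

    def f(x=loud_default()):    # "default evaluated" is printed here, when def runs
        return x

    print("after def")

    f()      # prints nothing: the stored default object is simply reused
    f(42)    # prints nothing: the default isn't consulted at all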
> There's a performance cost to late binding when the result will always be the same. The compiler could optimize "=>constant" to "=constant" at compilation time [1], but if it's not actually a compile-time constant, only a human can know that that can be done.
We're talking about folks new to the late-binding syntax, and probably to Python, who are in doubt. I don't think performance over correctness is what we want to emphasize here.
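For the record, here is roughly what the cost you describe looks like, emulated with the sentinel idiom since that's all we have today; expensive_default is a made-up stand-in for a costly computation whose result never changes:

    import time

    def expensive_default():
        time.sleep(0.01)   # stand-in for an expensive but constant computation
        return 100

    # Early binding: the cost is paid once, when the def statement runs.
    def f_early(n=expensive_default()):
        return n

    # Emulated late binding: the cost is paid on every defaulted call,
    # even though the result is always the same.
    def f_late(n=None):
        if n is None:
            n = expensive_default()
        return n

    start = time.perf_counter()
    for _ in range(100):
        f_early()
    print("early:", round(time.perf_counter() - start, 3))

    start = time.perf_counter()
    for _ in range(100):
        f_late()
    print("late: ", round(time.perf_counter() - start, 3))

Real, but that's an optimization question, not a correctness one.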
> But then the question becomes: should we recommend late-binding by default, with early-binding as an easy optimization? I'm disinclined to choose at this point, and will leave that up to educators and style guide authors.
I think most educators will go with "when in doubt, ask a mentor if available, or study harder if not -- anyway, you'll get it soon, it's not that hard", or perhaps offer a much longer paragraph of concrete advice. Style guide authors should not touch it, because it's not a matter of style when either choice is a potential bug. And that's why I think the benefits are basically limited to introspecting the deferred object, whether it's a special deferred object or an eval-able equivalent string. The choice is unpleasant for proponents: if you choose a string, Eric's "but then we have to support strings even if we get something better" is a criticism, and if you choose a descriptor-like protocol, David's criticism that it can and should be more general comes to bear.

You say that there's a deficiency with a generic deferred, in that in your plan late-bound argument expressions can access nonlocals -- but that's not a problem if the deferred is implemented as a closure. (This is an extension of a suggestion by Steven d'Aprano.) I would expect that anyway if generic deferreds are implemented, since they would be objects that could be returned from functions and "ejected" from the defining namespace.

It seems to me that access to nonlocals is also a problem for functions with late-bound argument defaults, if such a function is returned as the value of the defining function. I suppose it would have to be solved in the same way, by making that function a closure. So the difference is whether the closure is in the default itself, or in the function where the default is defined. But the basic issue, and its solution, is the same. That might be "un-Pythonic" for deferreds, but at the moment I can't see why it's more un-Pythonic for deferreds than for local functions as return values.
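Here is a concrete version of the closure point, using today's sentinel emulation (make_appender and friends are just illustrative names): the emulated late-bound default refers to a local of the defining function, so the returned function has to be a closure over it -- exactly the mechanism a deferred-implemented-as-a-closure would rely on.

    def make_appender(prefix):
        # 'prefix' is a local of make_appender; the emulated late-bound
        # default below refers to it, so 'append' must close over it.
        def append(item, target=None):
            if target is None:
                target = [prefix]    # late-bound default using a nonlocal
            target.append(item)
            return target
        return append

    app = make_appender("head")
    print(app("x"))    # ['head', 'x'] -- 'prefix' is still reachable
    print(app("y"))    # ['head', 'y'] -- fresh list, same captured 'prefix'

Regards,
Steve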