On Thu, Dec 9, 2021 at 5:54 PM Brendan Barnwell <brenbarn@brenbarn.net> wrote:
> On 2021-12-08 20:36, Chris Angelico wrote:
>> Remember, though: The comparison should be to a function that looks like this:
>>
>> def f(a=[], b=_SENTINEL1, c=_SENTINEL2, d=_SENTINEL3):
>>     if b is _SENTINEL1: b = {}
>>     if c is _SENTINEL2: c = some_function(a, b)
>>     if d is _SENTINEL3: d = other_function(a, b, c)
>>
>> If you find the long-hand form more readable, use the long-hand form! It's not going away. But the introspectability is no better or worse for these two. The late-bound defaults "{}", "some_function(a, b)", and "other_function(a, b, c)" do not exist as objects here. Using PEP 671's syntax, they would at least exist as string constants, allowing you to visually see what would happen (and, for instance, see that in help() and inspect.signature).
> I don't want to get bogged down in terminology, but I am becoming increasingly frustrated by you using the term "default" both for things that are values and things that are not, as if there is no difference between them.
That's absolutely correct: I am using the term "default" for anything that provides a default for an optional argument that was omitted. In some cases, they are default values. In other cases, they are default expressions. If your docstring says "omitting d will use the length of a", then the default for d is len(a).
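To make that concrete, here is a small sketch (the function g and its docstring are invented for illustration, not taken from anyone's code):

def g(a, d=None):
    """Process a. If d is omitted, the length of a is used."""
    if d is None:
        d = len(a)
    return d

The default *value* of d is None, but the default in the documented sense - what you actually get when you omit the argument - is len(a).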
> There are no late-bound defaults here, in the sense that I mean, which as I said before has to do with default VALUES. There is just code in the function body that does stuff. I am fine with code in a function body doing stuff, but that is the purview of the function and not the argument. An individual ARGUMENT having a default VALUE is not the same as the FUNCTION defining BEHAVIOR to deal with a missing value for an argument.
In a technical sense, the default value for b is _SENTINEL1, but would you describe that in the docstring, or would you say that omitting b would use a new empty dictionary? You're getting bogged down, not in terminology, but in mechanics. At an abstract level, the default for that argument is whatever would be used if the argument is omitted.
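The difference shows up directly in current introspection tools. A minimal sketch, assuming a module-level sentinel:

import inspect

_SENTINEL1 = object()

def f(b=_SENTINEL1):
    if b is _SENTINEL1:
        b = {}
    return b

print(inspect.signature(f))
# Prints something like: (b=<object object at 0x7f...>)
# help(f) likewise shows the opaque sentinel, not the intended "{}".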
> Your discussion of this point (as I interpret it, at least) continues to take it for granted that it is perfectly fine to move stuff between the function signature and the function body, and that those are somehow the same thing. They are not. Currently in Python there is nothing that can be used as a default argument this way:
>
> def f(a=<some expression here>):
>
> . . . that cannot also be done this way:
>
> obj = <some expression here>
> def f(a=obj):
That is correct, and that is a current limitation. Is it a fundamental one? Up until very recently, there was nothing in Python that could be used here:

obj = <some expression here>

that would also have the effect of:

x = 42

Was that a fundamental limitation? It changed. A feature was added, and what had previously been impossible became possible. If a feature is rejected simply because it makes something possible that previously wasn't, then no proposal should ever be accepted. To justify this objection, please explain WHY it is so important for defaults to all be objects. Not just "that's how they are now", but why that is an important feature.
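For concreteness: a minimal sketch, assuming the feature alluded to is assignment expressions (PEP 572, the "walrus" operator):

obj = (x := 42)   # a single expression that also has the effect of x = 42
print(obj, x)     # 42 42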
> If you want to change that, okay, but I feel in the PEP and your discussion of it you are not fully acknowledging that this is actually how functions work in Python now, and thus the PEP would break some existing assumptions and bring about a nontrivial change in how code can be refactored.
I'm definitely changing how functions work. Otherwise I wouldn't be proposing anything. Yes, I want it to be possible for argument defaults to no longer be objects. I consider this a feature, NOT a flaw. The counter-argument has always just been "but that's how it is".
> (This again may be why there is disagreement about "how confusing" the proposed change would be. Part of what I mean by "confusing" is that it requires changing assumptions like the one I mentioned.)
Right. And I put it to you that it won't actually be very confusing after all. The most common cases will simply behave as expected - in fact, they will behave MORE as expected than they currently do.
> I'll also point out that you misread my original example, which was:
>
> def f(a=[], b@={}, c=some_function(a, b), d@=other_function(a, b, c)):
>
> Note that c is early-bound here, whereas in your example you have changed it to "late-bound" (aka "behavior inside the function"). I realize this was probably just a thinko, but perhaps it also gently illustrates my point that peril lies in allowing early and late-bound defaults to mix within the same signature. It's not always trivial to remember which arguments are which. :-)
Oh, sorry, I didn't know whether that was a typo on your part or a mistake on mine. With synthetic examples like this, it's not always easy to tell. In real code, that would be unlikely to be an issue (and honestly, the distinction between a and b here would be highly unusual in a single function).

There is one small aspect of mixing that has some technical consequences, but I'm declaring it to be undefined behaviour: currently, the late-bound default for b would be allowed to refer to c, and it would succeed. Similarly, b's default could refer to d, but only if d is provided by the caller, and not if it's using its default. (Otherwise you'd get UnboundLocalError.) But I am absolutely okay with a future change, or a different Python implementation, declaring those to be errors.

Other than that, the rule is simple: parameters get initialized from left to right.
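A rough sketch of that left-to-right rule, spelled out in today's Python with an illustrative sentinel-based desugaring (the names here are invented; PEP 671 syntax is not needed to see the principle):

_MISSING = object()

def f(a=_MISSING, b=_MISSING):
    # Parameters are resolved from left to right:
    if a is _MISSING:
        a = []            # a's default must not use b: b may still be the sentinel here
    if b is _MISSING:
        b = len(a)        # safe: a has already been resolved
    return a, b

print(f())         # ([], 0)
print(f([1, 2]))   # ([1, 2], 2)

ChrisA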