@Chris My bottom line, as I wrote before, is that even if this were introduced, I probably will continue to default to "def foo(arg=None): if arg is None: arg = default" in my own code until I start seeing "def foo(arg=>default)" in a lot of the code I read. Since Mailman generally supports about 4 Python versions, that means I won't see it in Mailman until 2027 or so. But I'm not George Bush to say "Read my lips: no new (syn)taxes!"

Unless somebody comes up with some new, really interesting use case, I think the suggestion somebody (sorry to somebody!) made earlier to "Just Do It" and submit to the SC is the right one. Both David and I are convinced that there is value added in late binding for new mutables and for defaults computed from actual arguments, even if we're not convinced it's enough. The proposal has plenty of fans, who *are* convinced and *will* use it. I don't see a prospect for that new, really interesting use case, at least not here on Python-Ideas; the discussion is just variations on the same themes. On the other hand, a PEP under consideration may get a little more interest from the Python-Dev crowd, and obviously from the SC itself. They may have use cases or other improvements to offer. "Now is better than never." The SC will let you know if the companion koan is applicable. ;-)

@Chris You may or may not want to read my variations on the themes. ;-)
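For concreteness, here is a minimal sketch of the two spellings I mean, using a fresh list as the example default; the "=>" form is PEP 671's proposed syntax and is not valid Python today:

    # Today's idiom: spell "late binding" with a None sentinel in the body.
    def foo(arg=None):
        if arg is None:
            arg = []          # evaluated on each call, so callers never share one list
        arg.append("spam")
        return arg

    # PEP 671's proposed spelling (not valid Python today); the default
    # expression would be evaluated at call time instead:
    #
    #     def foo(arg=>[]):
    #         arg.append("spam")
    #         return arg

Chris Angelico writes: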
In the numeric stuff, if I have:
newarray = (A @ B) | (C / D) + (E - F)
That's @, |, /, +, and -. So 5 operators, and 25 "complexity points". If I added one more operator, 36 "complexity points" seems reasonable. And if I removed one of those operators, 16 "complexity points" feels about right.
For my part, I would say that it's quite the opposite. This is three parenthesized tokens, each of which contains two things combined in a particular way. That's six 'things' combined in particular ways. Cognitive load is very close to this version:
newarray = (A * B) + (C * D) + (E * F)
I don't have the studies offhand, but "7 plus or minus 2" is famous enough; google that and you'll find plenty. I'll bet you even find "cognitive complexity of mathematical formulae" in the education literature. (And if not, we should sue all the Departments of Education in the world for fraud. ;-)

I do have the words: "this is a sum of binary products". This basically reduces the cognitive complexity to two concepts plus a scan of the list of variables. Given that they're actually in alphabetical order, "first variable is A" is enough to reproduce the expression. That's much simpler than trying to describe David's 5-operator case with any degree of specificity, or even just to reproduce his formula without a lot of effort to memorize it!

Also, just from the regularity of the form and its expression as an algebraic formula, I can deduce that almost certainly A, C, and E have the same type, and B, D, and F have the same type, and very likely those two types are the same. Not so for the five-operator case, where I would be surprised if fewer than 3 types were involved. Of course, this type information is probably redundant: I probably remember not only the types but lots of other attributes of A through F. But this kind of redundancy is good! It reinforces my understanding of the expression and of the program that surrounds it.
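To make "two concepts plus a scan of the variables" concrete, here is one way the regular form collapses (my sketch; the numeric values are just stand-ins):

    # The regular "sum of binary products" shape is one concept (sum) applied
    # to another (pairwise product), plus a scan of the operand names.
    A, B, C, D, E, F = 2, 3, 5, 7, 11, 13          # stand-in values
    newarray = (A * B) + (C * D) + (E * F)
    assert newarray == sum(x * y for x, y in [(A, B), (C, D), (E, F)])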
even though this uses a mere two operators. It's slightly more, but not multiplicatively so. (The exact number of "complexity points" will depend on what A through F represent, but the difference between "all multiplying and adding" and "five distinct operators" is only about three points.)
That may be true for you, but it's definitely not true for my economics graduate students.
Sure, knowing what `hi` defaults to *could be useful*. I'm sure if I used that function I would often want to know... and also often just assume the default is "something sensible." I just don't think that "could be useful" as a benefit is nearly as valuable as the cost of a new sigil and a new semantics adding to the cognitive load of Python.
Yes, but "something sensible" could be "len(stuff)", "len(stuff)-1", or various other things. Knowing exactly which of those will tell you exactly how to use the function.
@David: I find the "hi=len(stuff)" example, along with the "lst=[]" examples, fairly persuasive (maybe it moves me to +/- 0).

@Chris: It would be a lot more persuasive if you had a plausible explicit list of "various other things". Even "len(stuff) - 1" is kind of implausible, given Python's consistent 0-based indexing and closed-open ranges (yeah, I know some people like to use the largest value in the range rather than the least upper bound not in the range, but I consider that bad style in Python, and the two denote the same semantics). And "len(stuff)" itself is "the obvious" default. How often is the computed default either "unobvious", or just one of multiple frequently useful values?
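For what it's worth, the contrast under discussion looks roughly like this for bisect (a simplified sketch; the "=>" line is PEP 671's proposed syntax and doesn't run today):

    # Today's idiom: the signature can only say "hi is optional"; the real
    # default is computed in the body, out of sight of help() and the docs.
    def bisect_right(stuff, x, lo=0, hi=None):
        if hi is None:
            hi = len(stuff)   # the actual default, resolved here
        ...                   # search logic elided

    # Under PEP 671 the signature itself would name the default
    # (not valid Python today):
    #
    #     def bisect_right(stuff, x, lo=0, hi=>len(stuff)):
    #         ...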
If you accept that showing "lo=0" gives useful information beyond simply that lo is optional, then is it so hard to accept that "hi=>len(stuff)" is also immensely valuable?
"lo=0" is not only useful, it's also very close to the minimal notation for "lo is optional" (and in fact my bet is that Python would express mere optionality with a keyword such as opt, as in "bisect(stuff, opt lo, opt hi)", which makes "lo=0" shorter than a very plausible alternative).
For example, it also "could be useful" to have syntax that indicated the (expected) big-O complexity of that function.
@David: I think this is a bit sophistical. Big-O is occasionally relevant to the decision to use a function, but when it is, it's a very big deal indeed, which justifies at least the effort to read the source, identify the algorithm, and look it up in Knuth. :-) All you've really proven is that there is some information that has positive value but is useful so infrequently that nobody in this thread would favor adding syntax for it.

Chris's case is that the default value of an argument is something you need to know *every* time you use many functions with defaults. That's considerable, even if we end up concluding "not worth syntax" or maybe "not quite enough with the proposed syntax, keep trying".
Let's look at a function that has a lot of late-bound default arguments:
pd.read_csv(
    ...,    # 52 (!) arguments omitted
)
I'd have to look through the implementation, but my guess is that quite a few of the 25 late-bound defaults require calculations to set that take more than one line of code. I really don't WANT to know more than "this parameter is calculated according to some logic, perhaps complex logic" ... well, unless I think it pertains to something I genuinely want to configure, in which case I'll read the docs.
I agree, and I don't think this "don't care" attitude toward defaultable parameters is limited to this example, or even to this kind of example (shall we call it an "Alice's Restaurant parameter list"?).
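David's "more than one line of code" point is easy to make concrete; I imagine something with this shape (invented names, nothing to do with the real read_csv implementation):

    # A default whose computation takes several statements has to stay in the
    # body anyway; "=>" only helps when the default fits in a single expression.
    def read_table(path, sep=None):               # invented names, not pandas' code
        if sep is None:
            if path.endswith(".tsv"):
                sep = "\t"
            else:
                sep = ","                         # stand-in for fancier sniffing logic
        return path, sep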
Actually, I would guess that most of these default to something that's set elsewhere.
This function isn't a good showcase of PEP 671 - neither its strengths nor its weaknesses.
I agree with the "set elsewhere" guess, but I think that significantly *reduces* the number of cases where a late-bound default provides a substantial improvement over None.