[Tim]
... So long as I'm the only one looking at real-life use cases, mine is the only evidence I care about ;-) I don't really care about contrived examples, unless they illustrate that a proposal is ill-defined, impossible to implement as intended, or likely to have malignant unintended consequences outweighing its benefits.
[Brendan Barnwell]
You keep saying things like this with a smiley, and I realize you know what you're talking about (much more than I do), but I'd just like to push back a bit against that entire concept.
I'm not so keen on meta-discussions either ;-)
Number one, I think many people have been bringing in real life uses cases.
Keep in mind the context here: _this_ thread is specifically about listcomps and genexps. I agree there have been tons of use cases presented for statement-oriented applications (some positive for the feature, some negative), but not so much for listcomps and genexps.

It's worth noting again that "the" use case that started all this long ago was a listcomp that the current PEP points out still "won't work":

    total = 0
    progressive_sums = [total := total + value for value in data]

It's obvious what that's intended to do. It's not obvious why it blows up. It's a question of scope, and the scopes of names in synthesized functions is a thoroughly legitimate thing to question.

The suggestion made in the first message of this thread was the obvious scope change needed to make that example work, although I was motivated by looking at _other_ listcomp/genexp use cases. They wanted the same scope decision as the example above. But I didn't realize that the example above was essentially the same thing until after I made the suggestion.
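[As an aside, not part of the original exchange: the running-sum effect that example is after can be spelled today without any new scope rules. A minimal sketch using itertools.accumulate, borrowing the names `data` and `progressive_sums` from the example above:]

```python
from itertools import accumulate

data = [1, 2, 3, 4]

# accumulate() yields the progressive sums directly, so no name ever
# needs to be rebound inside the comprehension at all.
progressive_sums = list(accumulate(data))
print(progressive_sums)  # [1, 3, 6, 10]
```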
Number two, I disagree with the idea that looking at individual use cases and ignoring logical argumentation is the way to go.
Fine, then you argue, and I'll look at use cases ;-) Seriously, I don't at all ignore argument - but, yes, arguments are secondary to me. I don't give a rip about how elegant something is if it turns out to be unusable. Conversely, I don't _much_ care about how "usable" something is if the mental model for how it works is inexplicable.
The problem with it is that a lot of the thorny issues arise in unanticipated interactions between constructs that were designed to handle separate use cases.
Sure.
I also do not think it's appropriate to say "if it turns out there's a weird interaction between two features, then just don't use those two things together".
Sometimes it is, sometimes it isn't. For example, code using threads has to be aware of literal mountains of other features that may not work well (or at all) in a multi-threaded environment without major rewriting. Something as simple as "count += 1" may fail in mysterious ways otherwise. So it goes. But note that this is easily demonstrated by realistic code.
One of the great things about Python's design is that it doesn't just make it easy for us to write good code, but in many ways makes it difficult for us to write bad code.
That one I disagree with. It's very easy to write bad code in every language I'm aware of. It's just that Python programmers are too enlightened to approve of doing so ;-)
It is absolutely a good idea to think of the broad range of wacky things that COULD be done with a feature,
So present some!
not just the small range of things in the focal area of its intended use. We may indeed decide that some of the wacky cases are so unlikely that we're willing to accept them, but we can only decide that after we consider them. You seem to be suggesting that we shouldn't even bother thinking about such corner cases at all, which I think is a dangerous mistake.
To the contrary, bring 'em on. But there is no feature in Python you can't make "look bad" by contriving examples, from two-page regular expressions to `if` statements nested 16 deep. "But no sane person would do that" is usually - but not always - "refutation" enough for such stuff.
Taking the approach of "this individual use case justifies this individual feature" leads to things like JavaScript, a hellhole of special cases, unintended consequences, and incoherence between different corners of the language. There are real cognitive benefits to having language features make logical and conceptual sense IN ADDITION TO having practical utility, and fit together into a unified whole.
I haven't ignored that here. The scope rule for the synthesized functions implementing genexps and listcomps _today_ is: the names local to that function are the names appearing as `for` targets. All other names resolve to the same scopes they resolve to in the block containing the synthesized function.

The scope rule if the suggestion is adopted? The same, plus that a name appearing as a ":=" target is local to the containing block _if_ that name is otherwise unknown in the containing block. There's nothing incoherent or illogical about that, provided that you understand how Python scoping works at all. It's not, e.g., adding any _new_ concept of "scope" - just spelling out what the intended scopes are.

Of course it's worth noting that the scope decision made for ":=" targets in listcomps/genexps differs from the decision made for `for` target names. It's use cases that decide, for me, whether that's "the tail" or "the dog". Look again at the `progressive_sums` example above, and tell me whether _you'll_ be astonished if it works. Then are you astonished that
    x = 0
    ignore = [x := 1]
    x
    1
displays 1? Either way, are you astonished that
    x = 0
    ignore = [x := 1 for i in range(1)]
    x
    1
also displays 1? If you want to argue about "logical and conceptual sense", I believe you'll get lost in abstractions unless you _apply_ your theories to realistic examples.
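[For the record: both snippets above are runnable today. Under the semantics eventually accepted in PEP 572 (Python 3.8+), a ":=" target inside a comprehension binds in the enclosing scope, which is exactly the behavior being argued for here:]

```python
# Python 3.8+: the ":=" target binds in the enclosing (here, module) scope.
x = 0
ignore = [x := 1]
print(x)  # 1

x = 0
ignore = [x := 1 for i in range(1)]
print(x)  # 1 -- same answer; the comprehension's synthesized function
          # scope is bypassed for the ":=" target, as in the plain case
```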
Personally, my feeling on this whole thread is that these changes, if implemented, are likely to decrease the average readability of Python code, and I don't see the benefits as being worth the added complexity.
Of course consensus will never be reached. That's why Guido is paid riches beyond the dreams of avarice ;-)