On 15 October 2017 at 06:43, Nick Coghlan email@example.com wrote:
> On 15 October 2017 at 15:05, Guido van Rossum firstname.lastname@example.org wrote:
>> I would like to reboot this discussion (again). It feels to me we're
>> getting farther and farther from solving any of the problems we might
>> solve.
>>
>> I think we need to give up on doing anything about generators; the use
>> cases point in too many conflicting directions. So we should keep the
>> semantics there, and if you don't want your numeric or decimal context
>> to leak out of a generator, don't put a yield inside the with. (Yury and
>> have both remarked that this is not a problem in practice, given that
>> there are no bug reports or StackOverflow questions about this topic.)
> Let me have another go at building up the PEP 550 generator argument
> from first principles.
>
> The behaviour that PEP 550 says shouldn't change is the semantic
> equivalence of the following code:
> # Iterator form
> class _ResultsIterator:
>     def __init__(self, data):
>         self._itr = iter(data)
>
>     def __iter__(self):
>         return self
>
>     def __next__(self):
>         return calculate_result(next(self._itr))
>
> results = _ResultsIterator(data)
>
> # Generator form
> def _results_gen(data):
>     for item in data:
>         yield calculate_result(item)
>
> results = _results_gen(data)
> This had been non-controversial until recently, and I still don't
> understand why folks suddenly decided we should bring it into question
> by proposing that generators should start implicitly capturing state at
> creation time, just because it's technically possible for them to do so
> (yes, we can implicitly change the way all generators work, but no, we
> can't implicitly change the way all iterators work).
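For what it's worth, the equivalence is easy to check executably (using a stand-in calculate_result, since the example above leaves it abstract):

```python
# Stand-in for the abstract calculate_result in the example above.
def calculate_result(item):
    return item * 2

# Iterator form (__iter__ added so it works with for loops and list()).
class _ResultsIterator:
    def __init__(self, data):
        self._itr = iter(data)

    def __iter__(self):
        return self

    def __next__(self):
        return calculate_result(next(self._itr))

# Generator form.
def _results_gen(data):
    for item in data:
        yield calculate_result(item)

data = [1, 2, 3]
# Both forms produce exactly the same sequence of results.
assert list(_ResultsIterator(data)) == list(_results_gen(data)) == [2, 4, 6]
```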
This is non-controversial to me.
> The behaviour that PEP 550 thinks should change is for the following
> code to become roughly semantically equivalent, given the constraint
> that the context manager involved either doesn't manipulate any shared
> state at all (already supported), or else only manipulates context
> variables (the new part that PEP 550 adds):
> # Iterator form
> class _ResultsIterator:
>     def __init__(self, data):
>         self._itr = iter(data)
>
>     def __iter__(self):
>         return self
>
>     def __next__(self):
>         with adjusted_context():
>             return calculate_result(next(self._itr))
>
> results = _ResultsIterator(data)
>
> # Generator form
> def _results_gen(data):
>     for item in data:
>         with adjusted_context():
>             yield calculate_result(item)
>
> results = _results_gen(data)
> Today, while these two forms look like they should be comparable,
> they're not especially close to being semantically equivalent, as
> there's no mechanism that allows for implicit context reversion at the
> yield point in the generator form.
I'll have to take your word for this, as I can't think of an actual example that follows the pattern of your abstract description and in which I can immediately see the difference.

Without understanding why the difference matters in current code, I have no view on whether PEP 550 needs to "fix" this issue.
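The nearest I can get to constructing one myself (my example, not Nick's) is decimal's thread-local context: suspend a generator inside decimal.localcontext() and the modified context stays visible to the caller until the generator resumes or is closed:

```python
import decimal

def _values():
    # The generator suspends at the yield while still inside the with
    # block, so the modified decimal context is not reverted here.
    with decimal.localcontext() as ctx:
        ctx.prec = 3
        yield decimal.Decimal(1) / decimal.Decimal(7)

gen = _values()
print(next(gen))  # 0.143 -- computed at prec=3, as expected

# The caller now sees prec=3 too: the generator is paused inside its
# with block, and nothing has restored the thread-local context.
print(decimal.Decimal(1) / decimal.Decimal(7))  # also 0.143

# Closing the generator runs the with block's __exit__ and restores
# the caller's original context (default precision is 28 digits).
gen.close()
print(decimal.Decimal(1) / decimal.Decimal(7))  # 0.1428571428571428571428571429
```

If that's the discrepancy in question, the two forms really do behave differently today: the iterator form reverts the context on every __next__ call, while the generator form leaves it applied across the yield.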
> While I think PEP 550 would still be usable without fixing this
> discrepancy, I'd be thoroughly disappointed if the only reason we
> decided not to do it was because we couldn't clearly articulate the
> difference in reasoning between:
> [...]
I think that if we can't describe the problem in a way that makes it obvious to the average Python user, that implies it's a corner case which is irrelevant to said average user - and so I'd consider fixing it a low priority. Specifically, a much lower priority than providing a context variable facility - which, while still not a common need, at least resonates with the average user in the sense of "I can imagine writing code that needed context the way Decimal does".
(And apologies for presenting an imagined viewpoint as what "the average user" might think...)
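As a footnote, for anyone trying to picture what "a context variable facility" would look like in user code: the sketch below uses the contextvars API that eventually shipped in Python 3.7 via PEP 567 (a successor to PEP 550), so the names are from that later module, not from PEP 550 itself, and the precision example is invented for illustration.

```python
import contextvars

# A context variable with a default, loosely analogous to one of
# decimal's per-context settings.
precision = contextvars.ContextVar("precision", default=28)

def render(value):
    # Readers see whatever value is current in their context.
    return round(value, precision.get())

def with_low_precision(value):
    # set() returns a token; reset() restores the prior value,
    # much as a context manager's __exit__ would.
    token = precision.set(2)
    try:
        return render(value)
    finally:
        precision.reset(token)

print(with_low_precision(3.14159))  # 3.14
print(render(3.14159))              # 3.14159 -- default restored
```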