[Python-ideas] Revised**5 PEP on yield-from
greg.ewing at canterbury.ac.nz
Mon Mar 2 05:09:24 CET 2009
Jacob Holm wrote:
> Did you read the spelled-out version at the bottom? No need to "break
> out" of anything. That happens automatically because of the "yield
> from". Just a few well-placed calls to next...
Yes, and I tried running it on my prototype implementation.
It gives exactly the results you suggested, and any further
next() calls on a or b raise StopIteration.
You're right that there's no need to break out early in the
case of generators, since it seems they just continue to
raise StopIteration if you call next() on them after they've
finished. Other kinds of iterators might not be so forgiving.
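A small self-contained illustration of that behaviour (gen is a throwaway name, not from the thread):

```python
# Once a generator is exhausted, every further next() call raises
# StopIteration again, so a delegator need not break out explicitly.
def gen():
    yield 1

g = gen()
assert next(g) == 1          # normal value
for _ in range(3):           # exhausted: each call raises again
    try:
        next(g)
    except StopIteration:
        pass
    else:
        raise AssertionError("expected StopIteration")
```

An iterator implemented by hand, with no such guarantee in its __next__, is the "less forgiving" case being alluded to.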
> I am not worried about R running out, each of A and B would find out
> about that next time they tried to get a value. I *am* worried about R
> doing a yield-from to X (the xrange in this example) which then needs to
> appear in both stacks to get the expected behavior from the PEP.
What's wrong with it appearing in both stacks, though,
as long as it gives the right result?
> I am saying that what the PEP currently specifies is not quite so simple
> to speed up as you and Arnaud seem to think.
> (Even with a simple stack, handling 'close' and 'StopIteration'
> correctly is not exactly trivial)
It's a bit tricky to handle all the cases correctly, but
that has nothing to do with speed.
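One concrete example of the tricky cases: closing a delegating generator must propagate GeneratorExit into the subiterator so its finally blocks run. A minimal sketch, using the yield from syntax as it now exists in Python 3.3+ (the names sub/outer are invented for illustration):

```python
# When a delegating generator is closed, GeneratorExit propagates
# into the subgenerator, running its finally clause -- one of the
# cases that makes 'close' handling non-trivial to get right.
closed = []

def sub():
    try:
        yield 1
        yield 2
    finally:
        closed.append("sub")

def outer():
    yield from sub()

g = outer()
assert next(g) == 1   # outer is now suspended inside sub
g.close()             # GeneratorExit travels through the yield-from
assert closed == ["sub"]
```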
I should perhaps point out that neither of my suggested
implementations actually use separate stacks like this.
The data structure is really more like the shared stack
you suggest, except that it's accessed from the "bottoms"
rather than the "top".
This is only a speed issue if the time taken to find the
"top" starting from one of the "bottoms" is a significant
component of the total running time. My conjecture is that
it won't be, especially if you do it iteratively in a
tight C loop.
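A toy model of that lookup, not the actual C implementation (Frame and delegated_next are invented names): each frame keeps a pointer to the subiterator it is currently delegating to, and a next() call walks from a "bottom" to the current "top" before pulling a value.

```python
class Frame:
    """Toy model of a generator frame. 'sub' points at the iterator
    this frame is currently delegating to via yield-from, or None."""
    def __init__(self, it):
        self.it = iter(it)
        self.sub = None

def delegated_next(bottom):
    # Find the current "top" by following the delegation chain in a
    # tight loop, then fetch one value directly from it.
    frame = bottom
    while frame.sub is not None:
        frame = frame.sub
    return next(frame.it)
```

The cost of delegated_next is proportional to the chain length, which is exactly the quantity conjectured to be insignificant in practice.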
Some timing experiments I did suggest that the current
implementation (which finds the "top" using recursion in C
rather than iteration) is at least 20 times faster at
delegating a next() call than using a for-loop, which is
already a useful improvement, and the iterative method
can only make it better.
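For reference, these are the two delegation styles being compared; the timing ratio is machine- and version-dependent, so this sketch only shows that they are behaviourally equivalent:

```python
def inner():
    yield 1
    yield 2

def via_for_loop():
    # Pre-yield-from delegation: one Python-level loop
    # iteration per delegated value.
    for v in inner():
        yield v

def via_yield_from():
    # PEP 380 delegation: next() calls are forwarded to the
    # subgenerator without re-entering Python bytecode.
    yield from inner()

assert list(via_for_loop()) == list(via_yield_from()) == [1, 2]
```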
So until someone demonstrates that the simple algorithm
I'm using is too slow in practice, I don't see much point
in trying to find a smarter one.