On 8 October 2012 00:36, Guido van Rossum wrote:
On Sun, Oct 7, 2012 at 3:43 PM, Oscar Benjamin wrote:
I think what Serhiy is saying is that although PEP 380 mainly discusses generator functions, it has effectively changed the definition of what it means to be an iterator for all iterators: previously an iterator was just something that yielded values, but now it also returns a value. Since the meaning of an iterator has changed, functions that work with iterators need to be updated.
I think there are different philosophical viewpoints possible on that issue. My own perspective is that there is no change in the definition of iterator -- only in the definition of generator. Note that the *ability* to attach a value to StopIteration is not new at all.
I guess I'm viewing it from the perspective that an ordinary iterator is simply an iterator that happens to return None just like a function that doesn't bother to return anything. If I understand correctly, though, it is possible for any iterator to return a value that yield from would propagate, so the feature (returning a value) is not specific to generators.
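A small sketch of that last point (the names here are mine, not from the thread): even a plain class-based iterator, not a generator, can attach a value to its StopIteration, and "yield from" will propagate it.

```python
class Counter:
    """A plain iterator (not a generator) that attaches a value
    to the StopIteration it raises when exhausted."""
    def __init__(self, limit):
        self.limit = limit
        self.count = 0
    def __iter__(self):
        return self
    def __next__(self):
        if self.count >= self.limit:
            # The value travels with the exception, just as a
            # generator's return value does.
            raise StopIteration(self.count)
        self.count += 1
        return self.count

def consume():
    total = yield from Counter(3)  # receives 3 via StopIteration.value
    yield total

print(list(consume()))  # [1, 2, 3, 3]
```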
This feature is new in Python 3.3, which was released a week ago
It's been in alpha/beta/candidate for a long time, and PEP 380 was first discussed in 2009.
so it is not widely used yet, but it has uses that have nothing to do with coroutines.
Yes, as a shortcut for "for x in <iterator>: yield x". Note that the for-loop ignores the value in the StopIteration -- would you want to change that too?
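To illustrate the shortcut reading (a minimal example of my own, not from the thread): when no return value is involved, the two spellings are interchangeable.

```python
def chain2(a, b):
    # The pre-3.3 spelling: an explicit loop that re-yields each value.
    for x in a:
        yield x
    for x in b:
        yield x

def chain2_new(a, b):
    # The PEP 380 shortcut for the same thing.
    yield from a
    yield from b

print(list(chain2([1, 2], [3])))      # [1, 2, 3]
print(list(chain2_new([1, 2], [3])))  # [1, 2, 3]
```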
Not really. I thought about how it could be changed. Once APIs are available that use this feature to communicate important information, use cases will arise for using the same APIs outside of a coroutine context. I'm not really sure how you could get the value from a for loop. I guess it would have to be tied to the else clause in some way.
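One way to recover the value today, without any change to the for statement, is to drive the iterator by hand and catch the StopIteration yourself (a sketch with invented names):

```python
def gen():
    yield 'a'
    yield 'b'
    return 'done'  # attached to StopIteration in Python 3.3+

def exhaust(iterable):
    """Run the equivalent of a for-loop by hand, so the value carried
    by StopIteration is not silently discarded."""
    items = []
    it = iter(iterable)
    while True:
        try:
            items.append(next(it))
        except StopIteration as exc:
            return items, exc.value

items, value = exhaust(gen())
print(items, value)  # ['a', 'b'] done
```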
As an example of how you could use it, consider parsing a file that can contain #include statements. When an #include statement is encountered, we need to insert the contents of the included file. This is easy to do with a recursive generator. The example uses the return value of the generator to keep track of which line is being parsed in relation to the flattened output file:
    def parse(filename, output_lineno=0):
        with open(filename) as fin:
            for input_lineno, line in enumerate(fin):
                if line.startswith('#include '):
                    subfilename = line.split()[1]
                    output_lineno = yield from parse(subfilename, output_lineno)
                else:
                    try:
                        yield parse_line(line)
                    except ParseLineError:
                        raise ParseError(filename, input_lineno, output_lineno)
                    output_lineno += 1
        return output_lineno
Hm. This example looks constructed to prove your point... It would be easier to count the output lines in the caller. Or you could use a class to hold that state. I think it's just a bad habit to start using the return value for this purpose. Please use the same approach as you would before 3.3, using "yield from" just as the shortcut I mentioned above.
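For comparison, here is a runnable sketch of the caller-side counting being suggested; the in-memory sources dict and the trivial parse_line are stand-ins of mine for real files and real parsing:

```python
# The generator only yields parsed lines; whoever consumes it
# does the numbering, so no return value is needed.
sources = {
    'main.txt': ['a', '#include sub.txt', 'b'],
    'sub.txt': ['x', 'y'],
}

def parse_line(line):
    return line.upper()  # stand-in for real per-line parsing

def parse(filename):
    for line in sources[filename]:
        if line.startswith('#include '):
            yield from parse(line.split()[1])  # plain shortcut, no return value
        else:
            yield parse_line(line)

# The consumer keeps the count:
numbered = list(enumerate(parse('main.txt')))
print(numbered)  # [(0, 'A'), (1, 'X'), (2, 'Y'), (3, 'B')]
```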
I'll admit that the example is contrived, but it's meant to explore how the new feature might be used rather than to prove a point (otherwise I would have contrived a reason for wanting to use filter()). I just wanted to demonstrate that people can (and will) use this outside of a coroutine context. Also, I envisage something like this being a common use case. The 'yield from' expression can only provide information to its immediate caller by returning a value attached to StopIteration or by raising a different type of exception. There will be many cases where people want to get some information about what was yielded/done by 'yield from' at the point where it is used. Oscar