Re: [Python-Dev] Coroutines, generators, function calling

On Tue, 2005-10-18 at 09:07 -0400, Jim Jewett wrote:
Suppose now I want to move the window animation into a function, to factor out the code:
    def animate(win, steps):
        for y in steps:
            win.move(0, y*20)
            yield Timeout(0.1)
    def show_message(msg):
        win = create_window(msg)
        animate(win, xrange(10))         # slide down
        yield Timeout(3)
        animate(win, xrange(10, 0, -1))  # slide up
        win.destroy()
This obviously doesn't work, because calling animate() merely produces another generator object instead of running the function body. In coroutine terms, it produces another coroutine, when all I wanted was to call a function.
I don't suppose there could be a way to make the yield inside the subfunction have the same effect as if it was inside the function that called it? Perhaps some special notation, either at function calling or at function definition?
---------------------------------
I may be missing something, but to me the answer looks like:
    def show_message(msg):
        win = create_window(msg)
        for v in animate(win, xrange(10)):         # slide down
            yield v
        yield Timeout(3)
        for v in animate(win, xrange(10, 0, -1)):  # slide up
            yield v
        win.destroy()
Sure, that would work. Or even this, if the scheduler would automatically recognize generator objects being yielded and so would run the nested coroutine until finish:

    def show_message(msg):
        win = create_window(msg)
        yield animate(win, xrange(10))         # slide down
        yield Timeout(3)
        yield animate(win, xrange(10, 0, -1))  # slide up
        win.destroy()

Sure, it could work, but still... I wish for something that would avoid creating a nested coroutine. Maybe I'm asking for too much, I don't know. Just trying to get some feedback...

Regards.

--
Gustavo J. A. M. Carneiro
<gjc@inescporto.pt> <gustavo@users.sourceforge.net>
The universe is always one step beyond logic.
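Such a scheduler can be sketched in a few lines. This is a minimal illustration only: run, inner, and outer are hypothetical names (not any real library's API), and it handles plain values and nested generators but nothing else.

    # Minimal trampoline: if a coroutine yields a generator, run that
    # generator to completion before resuming the caller.
    # All names here are illustrative, not a real scheduler API.
    import types

    def run(coro):
        stack = [coro]                # call stack of nested generators
        while stack:
            try:
                value = next(stack[-1])
            except StopIteration:
                stack.pop()           # nested coroutine finished
                continue
            if isinstance(value, types.GeneratorType):
                stack.append(value)   # descend into the nested coroutine
            else:
                print("scheduler got:", value)  # e.g. a Timeout request

    def inner():
        yield "tick"
        yield "tock"

    def outer():
        yield "start"
        yield inner()                 # nested coroutine, run transparently
        yield "end"

    run(outer())

The values yielded by inner() reach the scheduler as if outer() had yielded them itself, which is exactly the transparency being asked for.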

Sure, that would work. Or even this, if the scheduler would automatically recognize generator objects being yielded and so would run the nested coroutine until finish:
This idea has been discussed before. I think the problem with recognizing generators as the subject of "yield" statements is that then you can't yield a generator even if you want to.

The best syntax I can think of without adding a new keyword looks like this:

    yield from x

which would be equivalent to

    for i in x:
        yield i

Note that this equivalence would imply that x can be any iterable, not just a generator. For example:

    yield from ['Hello', 'world']

would be equivalent to

    yield 'Hello'
    yield 'world'
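This syntax was in fact later adopted: Python 3.3 added "yield from" via PEP 380. The basic iteration equivalence claimed above can be checked directly:

    # For plain iteration, "yield from x" (Python 3.3+, PEP 380) behaves
    # like looping over x and yielding each item.

    def with_loop(x):
        for i in x:
            yield i

    def with_yield_from(x):
        yield from x

    items = ['Hello', 'world']
    print(list(with_loop(items)))        # ['Hello', 'world']
    print(list(with_yield_from(items)))  # ['Hello', 'world']

And, as noted, x may be any iterable, not just a generator.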

Andrew Koenig wrote:
Sure, that would work. Or even this, if the scheduler would automatically recognize generator objects being yielded and so would run the the nested coroutine until finish:
This idea has been discussed before. I think the problem with recognizing generators as the subject of "yield" statements is that then you can't yield a generator even if you want to.
The best syntax I can think of without adding a new keyword looks like this:
yield from x
which would be equivalent to
    for i in x:
        yield i
Note that this equivalence would imply that x can be any iterable, not just a generator. For example:
yield from ['Hello', 'world']
would be equivalent to
    yield 'Hello'
    yield 'world'
Hmm, I actually quite like that. The best I came up with was "yield for", and that just didn't read correctly. Whereas "yield from seq" says exactly what it is doing.

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan@gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
http://boredomandlaziness.blogspot.com

Andrew Koenig wrote:
Sure, that would work. Or even this, if the scheduler would automatically recognize generator objects being yielded and so would run the the nested coroutine until finish:
This idea has been discussed before. I think the problem with recognizing generators as the subject of "yield" statements is that then you can't yield a generator even if you want to.
The best syntax I can think of without adding a new keyword looks like this:
yield from x
which would be equivalent to
    for i in x:
        yield i
My eyes really like the syntax, but I wonder about its usefulness. In rdflib, particularly here:

http://svn.rdflib.net/trunk/rdflib/backends/IOMemory.py

we yield values from inside for loops all over the place, but the yielded value is very rarely just the index value (only 1 of 14 yields); it is usually something calculated from the index value. So the new syntax would not be useful unless it provided access to the index item as a variable, like:

    yield foo(i) for i in x

which barely saves you anything (a colon, a newline, and an indent). (Hey wait, isn't that a generator comprehension? Haven't really encountered those yet.)

Of course rdflib could be the minority case, and most folks who yield in loops are yielding only the index value directly.

Off to read the generator comprehension docs...

-Michel
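For reference, the generator expression ("generator comprehension") form that does exist, since Python 2.4 (PEP 289), is close to the syntax wished for above; foo and x here are illustrative placeholders:

    # A generator expression lazily computes a value from each item;
    # it is itself a generator object. foo and x are placeholders.
    def foo(i):
        return i * i

    x = range(5)
    squares = (foo(i) for i in x)    # parentheses, not brackets
    print(list(squares))             # [0, 1, 4, 9, 16]

Inside a generator function, though, one would still have to write the explicit loop to re-yield those computed values.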

We yield values from inside for loops all over the place, but the yielded value is very rarely just the index value (only 1 of 14 yields); it is usually something calculated from the index value. So the new syntax would not be useful unless it provided access to the index item as a variable, like:
yield foo(i) for i in x
which barely saves you anything (a colon, a newline, and an indent). (hey wait, isn't that a generator comprehension?
Here's a use case:

    def preorder(tree):
        if tree:
            yield tree
            yield from preorder(tree.left)
            yield from preorder(tree.right)
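With the syntax as later adopted in Python 3.3, this use case runs as written. A self-contained version, with a small Node class added purely for illustration:

    # Pre-order tree traversal with "yield from" (Python 3.3+).
    # The Node class is illustrative; preorder is the use case above.
    class Node:
        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left
            self.right = right

    def preorder(tree):
        if tree:
            yield tree
            yield from preorder(tree.left)
            yield from preorder(tree.right)

    #       2
    #      / \
    #     1   3
    tree = Node(2, Node(1), Node(3))
    print([n.value for n in preorder(tree)])   # [2, 1, 3]

Recursive traversals like this are where delegation shines: each level re-yields its subtree's nodes without any explicit loop.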

so the new syntax would not be useful, unless it was something that provided access to the index item as a variable, like:
yield foo(i) for i in x
which barely saves you anything (a colon, a newline, and an indent).
Not even that, because you can omit the newline and indent:

    for i in x: yield foo(i)

There's a bigger difference between

    for i in x:
        yield i

and

    yield from x

Moreover, I can imagine optimization opportunities for "yield from" that would not make sense in the context of comprehensions.
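One concrete difference that later materialized in PEP 380: "yield from" delegates more than iteration. Values passed in with send() reach the subgenerator directly, and the subgenerator's return value becomes the value of the "yield from" expression, neither of which a plain for loop can do. A sketch (averager and wrapper are illustrative names):

    # "yield from" delegation beyond plain iteration (Python 3.3+).
    def averager():
        total = count = 0
        while True:
            value = yield
            if value is None:
                return total / count   # becomes the yield-from result
            total += value
            count += 1

    def wrapper(results):
        result = yield from averager()  # send() calls pass straight through
        results.append(result)

    results = []
    gen = wrapper(results)
    next(gen)            # prime the coroutine
    gen.send(10)
    gen.send(20)
    try:
        gen.send(None)   # averager returns; wrapper records the result
    except StopIteration:
        pass
    print(results)       # [15.0]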
participants (4)
- Andrew Koenig
- Gustavo J. A. M. Carneiro
- Michel Pelletier
- Nick Coghlan