Withdrawn PEP 288 and thoughts on PEP 342
PEP 288 is now withdrawn. The generator exceptions portion is subsumed by PEP 343, and the generator attributes portion never garnered any support. The fate of generator attributes is interesting vis-à-vis PEP 342. The motivation was always related to supporting advanced generator uses such as emulating coroutines and writing generator-based data consumer functions. At the time, Guido and everyone else found those use cases to be less than persuasive. Also, people countered that the functionality could be easily simulated with class-based iterators, global variables, or passing a mutable argument to a generator. Amazingly, none of those objections seem to be directed toward 342, which somehow seems on the verge of acceptance even without use cases, clear motivation, examples, or a draft implementation.

Looking back at the history of 288, generator attributes surfaced only in later drafts. In the earlier drafts, the idea for passing arguments to and from running generators used an argument to next() and a return value for yield. If this sounds familiar, it is because it is not much different from the new PEP 342 proposal. However, generator argument passing via next() was shot down early on. The insurmountable conceptual flaw was an off-by-one issue: the very first call to next() does not correspond to a yield statement; instead, it corresponds to the first lines of a generator (those run *before* the first yield). All of the proposed use cases needed to have the data passed in earlier.

With the death of that idea, generator attributes were born as a way of passing in data before the first yield was encountered and receiving data after the yield. This was workable and satisfied the use cases. Coroutine simulations such as those in Dr. Mertz's articles were easily expressed with generator attributes.
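The mutable-argument workaround mentioned above can be sketched in a few lines; `accumulate` and the `control` mailbox are invented names for illustration, not from the PEP:

```python
def accumulate(control):
    # The caller shares this mutable "mailbox" and mutates it
    # between next() calls to smuggle data into the generator.
    total = 0
    while True:
        total += control["value"]
        yield total

control = {"value": 1}
gen = accumulate(control)
print(next(gen))            # 1
control["value"] = 10       # pass new data by mutating the argument
print(next(gen))            # 11
```

This is exactly the kind of indirection the objectors considered "easy enough": the data channel exists, but it lives outside the generator protocol itself.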
As a further benefit, using attributes was a natural approach because that same technique has long been used with classes (so no new syntax was needed and the learning curve was zero).

In contrast to PEP 288's low-impact approach, PEP 342 changes the implementation of the for-loop, alters the semantics of "continue", introduces new- and old-style iterators, and creates a new magic method. Meanwhile, it hasn't promised any advantages over the dead PEP 288 proposals. IOW, I don't follow how 342 got this far, how 342 intends to overcome the off-by-one issue, how it addresses all of the other objections leveled at the now-dead PEP 288, and why no one appears concerned about introducing yet another new-style/old-style issue that will live in perpetuity.

Raymond

Sidenote: generator attributes also failed because generators lacked a sufficiently elegant way to refer to running instances of themselves (there is no self argument, so we would need an access function or a dynamic function attribute accessible only from within a running generator).
At 08:24 PM 6/16/2005 -0400, Raymond Hettinger wrote:
Looking back at the history of 288, generator attributes surfaced only in later drafts. In the earlier drafts, the idea for passing arguments to and from running generators used an argument to next() and a return value for yield. If this sounds familiar, it is because it is not much different from the new PEP 342 proposal. However, generator argument passing via next() was shot down early on. The insurmountable conceptual flaw was an off-by-one issue. The very first call to next() does not correspond to a yield statement; instead, it corresponds to the first lines of a generator (those run *before* the first yield). All of the proposed use cases needed to have the data passed in earlier.
Huh? I don't see why this is a problem. PEP 342 says: """When the *initial* call to __next__() receives an argument that is not None, TypeError is raised; this is likely caused by some logic error."""
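In the spelling Python eventually shipped (a send() method rather than an argument to next()), that initial-call rule is observable directly; `counter` here is an invented example:

```python
def counter():
    # The first resumption lands at the bare "yield"; PEP 342's rule
    # says it must not be handed a non-None value.
    x = yield
    while True:
        x = yield x * 2

g = counter()
try:
    g.send(1)               # non-None on the *initial* resumption
except TypeError as e:
    print("refused:", e)

g.send(None)                # priming with None is allowed
print(g.send(21))           # 42
```

So the off-by-one concern is handled by rejecting data on the priming call and requiring start-up data to arrive as ordinary generator arguments.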
With the death of that idea, generator attributes were born as a way of being able to pass in data before the first yield was encountered and to receive data after the yield. This was workable and satisfied the use cases. Coroutine simulations such as those in Dr Mertz's articles were easily expressed with generator attributes. As a further benefit, using attributes was a natural approach because that same technique has long been used with classes (so no new syntax was needed and the learning curve was zero).
Ugh. Having actually emulated co-routines using generators, I have to tell you that I don't find generator attributes natural for this at all; returning a value or error (via PEP 343's throw()) from a yield expression as in PEP 342 is just what I've been wanting.
In contrast to PEP 288's low impact approach, PEP 342 changes the implementation of the for-loop, alters the semantics of "continue", introduces new and old-style iterators, and creates a new magic method.
I could definitely go for dropping __next__ and the next() builtin from PEP 342, as they don't do anything extra. I also personally don't care about the new continue feature, so I could do without for-loop alteration too. I'd be perfectly happy passing arguments to next() explicitly; I just want yield expressions.
Meanwhile, it hasn't promised any advantages over the dead PEP 288 proposals.
Reading the comments in PEP 288's revision history, it sounds like the argument was to postpone implementation of next(arg) and yield expressions to a later version of Python, after more community experience with generators. We've had that experience now.
IOW, I don't follow how 342 got this far, how 342 intends to overcome the off-by-one issue,
It explicitly addresses it already.
how it addresses all of the other objections leveled at the now dead PEP 288
Arguments for waiting aren't the same thing as arguments for never doing. I interpret the comments in 288's history as ranging from -0 to +0 on the yield expr/next(arg) issue, and didn't see any -1's except on the generator attribute concept.
and why no one appears concerned about introducing yet another new-style/old-style issue that will live in perpetuity.
I believe it has been brought up before, and I also believe I pointed out once or twice that __next__ wasn't needed. I think Guido even mentioned something to that effect himself, but everybody was busy with PEP 340-inspired ideas at the time. 342 was split off in part to avoid losing the ideas that were in it.
[Phillip]
I could definitely go for dropping __next__ and the next() builtin from PEP 342, as they don't do anything extra. I also personally don't care about the new continue feature, so I could do without for-loop alteration too. I'd be perfectly happy passing arguments to next() explicitly; I just want yield expressions.
That's progress! Please do what you can to get the non-essential changes out of 342.
Meanwhile, it hasn't promised any advantages over the dead PEP 288 proposals.
Reading the comments in PEP 288's revision history, it sounds like the argument was to postpone implementation of next(arg) and yield expressions to a later version of Python, after more community experience with generators. We've had that experience now.
288 was brought out of retirement a few months ago. Guido hated every variation of argument passing and frequently quipped that data passing was trivially accomplished through mutable arguments to a generator, through class-based iterators, or via a global variable. I believe all of those comments were made recently and they all apply equally to 342.

Raymond
On 6/16/05, Raymond Hettinger
[Phillip]
I could definitely go for dropping __next__ and the next() builtin from PEP 342, as they don't do anything extra. I also personally don't care about the new continue feature, so I could do without for-loop alteration too. I'd be perfectly happy passing arguments to next() explicitly; I just want yield expressions.
That's progress! Please do what you can to get the non-essential changes out of 342.
Here's my current position: instead of g.__next__(arg) I'd like to use g.next(arg). The next() builtin then isn't needed. I do like "continue EXPR" but I have to admit I haven't even tried to come up with examples -- it may be unnecessary. As Phillip says, yield expressions and g.next(EXPR) are the core -- and also incidentally look like they will cause the most implementation nightmares. (If someone wants to start implementing these two now, go right ahead!)
Meanwhile, it hasn't promised any advantages over the dead PEP 288 proposals.
Reading the comments in PEP 288's revision history, it sounds like the argument was to postpone implementation of next(arg) and yield expressions to a later version of Python, after more community experience with generators. We've had that experience now.
288 was brought out of retirement a few months ago. Guido hated every variation of argument passing and frequently quipped that data passing was trivially accomplished through mutable arguments to a generator, through class-based iterators, or via a global variable. I believe all of those comments were made recently and they all apply equally to 342.
That was all before I (re-)discovered yield-expressions (in Ruby!), and mostly in response to the most recent version of PEP 288, with its problem of accessing the generator instance. I now strongly feel that g.next(EXPR) and yield-expressions are the way to go. Making g.next(EXPR) an error when this is the *initial* resumption of the frame was also a (minor) breakthrough. Any data needed by the generator at this point can be passed in as an argument to the generator.

Someone should really come up with some realistic coroutine examples written using PEP 342 (with or without "continue EXPR").

--
Guido van Rossum (home page: http://www.python.org/~guido/)
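In the spelling eventually adopted (send() instead of next(arg)), the two core pieces — yield expressions, plus start-up data arriving as a plain argument — look like this; `running_average` is an invented example:

```python
def running_average():
    # Anything needed before the first yield arrives as an ordinary
    # argument; subsequent values arrive through the yield expression.
    total = 0.0
    count = 0
    value = yield            # priming call must pass None
    while True:
        total += value
        count += 1
        value = yield total / count

avg = running_average()
next(avg)                    # prime: run up to the first yield
print(avg.send(10))          # 10.0
print(avg.send(20))          # 15.0
```

The priming call is the off-by-one issue made explicit: the first next() merely runs the setup code, and only later resumptions carry data.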
At 08:03 PM 6/16/2005 -0700, Guido van Rossum wrote:
Someone should really come up with some realistic coroutine examples written using PEP 342 (with or without "continue EXPR").
How's this?

    def echo(sock):
        while True:
            try:
                data = yield nonblocking_read(sock)
                yield nonblocking_write(sock, data)
            except ConnectionLost:
                pass

    def run_server(sock, handler):
        while True:
            connected_socket = yield nonblocking_accept(sock)
            schedule_coroutine(handler(connected_socket))

    schedule_coroutine(
        run_server(setup_listening_socket("localhost", "echo"), echo)
    )

Of course, I'm handwaving a lot here, but this is a much clearer example than anything I tried to pull out of the coroutines I've written for actual production use. That is, I originally started this email with a real routine from a complex multiprocess application doing lots of IPC, and quickly got bogged down in explaining all the details of things like yielding to semaphores and whatnot. But I can give you that example too, if you like.

Anyway, the handwaving above is only in explanation of details, not in their implementability. It would be pretty straightforward to use Twisted's callback facilities to trigger next() or throw() calls to resume the coroutine in progress. In fact, schedule_coroutine is probably implementable as something like this in Twisted:

    def schedule_coroutine(geniter, *arg):
        def resume():
            value = geniter.next(*arg)
            if value is not None:
                schedule_coroutine(value)
        reactor.callLater(0, resume)

This assumes, of course, that you only yield between coroutines. A better implementation would need to be more like the events.Task class in peak.events, which can handle yielding to Twisted's "Deferreds" and various other kinds of things that can provide callbacks. But this snippet is enough to show that yield expressions let you write event-driven code without going crazy writing callback functions.

And of course, you can do this without yield expressions today, with a suitably magic function, but it doesn't read as well:

    yield nonblocking_accept(sock); connected_socket = events.resume()

This is how I actually do this stuff today.
'events.resume()' is a magic function that uses sys._getframe() to peek at the argument passed to the equivalent of 'next()' on the Task that wraps the generator. events.resume() can also raise an error if the equivalent of 'throw()' was called instead. With yield expressions, the code in those Task methods would just do next(arg) or throw(*sys.exc_info()) on the generator-iterator, and 'events.resume()' and its stack hackery could go away.
At 12:07 AM 6/17/2005 -0400, Phillip J. Eby wrote:
    def schedule_coroutine(geniter, *arg):
        def resume():
            value = geniter.next(*arg)
            if value is not None:
                schedule_coroutine(value)
        reactor.callLater(0, resume)
Oops. I just realized that this is missing a way to return a value back to a calling coroutine, and that I also forgot to handle exceptions:

    import sys, types
    from twisted.internet import reactor

    def schedule_coroutine(coroutine, stack=(), *args):
        def resume():
            try:
                if len(args) == 3:
                    value = coroutine.throw(*args)
                else:
                    value = coroutine.next(*args)
            except:
                if stack:
                    # send the error back to the "calling" coroutine
                    schedule_coroutine(stack[0], stack[1], *sys.exc_info())
                    return
                else:
                    # Nothing left in this pseudothread, let the
                    # event loop handle it
                    raise
            if isinstance(value, types.GeneratorType):
                # Yielded to a specific coroutine, push the current
                # one on the stack, and call the new one with no args
                schedule_coroutine(value, (coroutine, stack))
            elif stack:
                # Yielded a result, pop the stack and send the
                # value to the caller
                schedule_coroutine(stack[0], stack[1], value)
            # else: this pseudothread has ended
        reactor.callLater(0, resume)

There, that's better. Now, if a coroutine yields a coroutine, the yielding coroutine is pushed on a stack. If a coroutine yields a non-coroutine value, the stack is popped and the value returned to the previously-suspended coroutine. If a coroutine raises an exception, the stack is popped and the exception is thrown to the previously-suspended coroutine.

This little routine basically replaces a whole bunch of code in peak.events that manages a similar coroutine stack right now, but is complicated by the absence of throw() and next(arg); the generators have to be wrapped by objects that add equivalent functionality, and the whole thing gets a lot more complicated as a result.

Note that we could add a version of the above to the standard library without using Twisted. A simple loop class could have a deque of "callbacks to invoke", and the reactor.callLater() could be replaced by appending the 'resume' closure to the deque.
A main loop function would then just peel items off the deque and call them, looping until an unhandled exception (such as SystemExit) occurs, or until an exit is indicated in some other way.
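A deque-based stand-in for reactor.callLater along those lines can be sketched in a few lines; this is deliberately simplified (no coroutine stack, no throw(), yielded values are just collected), and names like `Loop` and `ticker` are invented:

```python
from collections import deque

class Loop:
    """Tiny event loop: a FIFO deque of zero-argument callbacks."""
    def __init__(self):
        self.calls = deque()
    def call_later(self, fn):
        self.calls.append(fn)
    def run(self):
        # Peel callbacks off the deque and call them until none remain.
        while self.calls:
            self.calls.popleft()()

loop = Loop()
out = []

def schedule_coroutine(gen):
    # Each resume() advances the generator one step, then reschedules
    # it, so several generators interleave cooperatively.
    def resume():
        try:
            out.append(next(gen))
        except StopIteration:
            return              # pseudothread finished
        loop.call_later(resume)
    loop.call_later(resume)

def ticker(name, n):
    for i in range(n):
        yield (name, i)

schedule_coroutine(ticker("a", 2))
schedule_coroutine(ticker("b", 2))
loop.run()
print(out)   # [('a', 0), ('b', 0), ('a', 1), ('b', 1)] -- interleaved
```

The round-robin interleaving falls out of the FIFO discipline: each step re-appends its own resume closure behind everyone else's.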
[Phillip]
I also personally don't care about the new continue feature, so I could do without for-loop alteration too.
[Guido]
I do like "continue EXPR" but I have to admit I haven't even tried to come up with examples -- it may be unnecessary. As Phillip says, yield expressions and g.next(EXPR) are the core -- and also incidentally look like they will cause the most implementation nightmares.
Let me go on record as a strong -1 for "continue EXPR". The for-loop is our most basic construct and is easily understood in its present form. The same can be said for "continue" and "break", which have the added advantage of a near-zero learning curve for people migrating from other languages.

Any urge to complicate these basic statements should be seriously scrutinized and held to high standards of clarity, explainability, obviousness, usefulness, and necessity. IMO, "continue EXPR" fails most of those tests. I would not look forward to explaining it in the tutorial and think it would stand out as an anti-feature.

Raymond
[Raymond]
Let me go on record as a strong -1 for "continue EXPR". The for-loop is our most basic construct and is easily understood in its present form. The same can be said for "continue" and "break" which have the added advantage of a near zero learning curve for people migrating from other languages.
Any urge to complicate these basic statements should be seriously scrutinized and held to high standards of clarity, explainability, obviousness, usefulness, and necessity. IMO, it fails most of those tests.
I would not look forward to explaining "continue EXPR" in the tutorial and think it would stand out as an anti-feature.
You sometimes seem to compound a rational argument with too much rhetoric. The correct argument against "continue EXPR" is that there are no use cases yet; if there were a good use case, the explanation would follow easily.

The original use case (though not presented in PEP 340) was to serve as the equivalent to "return EXPR" in a Ruby block. In Ruby you have something like this (I probably get the syntax wrong):

    a.foreach() { |x| ...some code... }

This executes the block for each item in a, with x (a formal parameter to the block) set to each consecutive item. In Python we would write it like this of course:

    for x in a:
        ...some code...

In Ruby, the block is an anonymous procedure (a thunk) and foreach() a method that receives a magic (anonymous) parameter which is the thunk. Inside foreach(), you write "yield EXPR" which calls the block with x set to EXPR. When the block contains a return statement, the return value is delivered to the foreach() method as the return value of yield, which can be assigned like this:

    VAR = yield EXPR

Note that Ruby's yield is just a magic call syntax that calls the thunk! But this means that the thunks can be used for other purposes as well. One common use is to have the block act as a Boolean function that selects items from a list; this way you could write filter() with an inline selection, for example (making this up):

    a1 = a.filter() { |x| return x > 0 }

might set a1 to the list of a's elements that are > 0. (Not saying that this is a built-in array method in Ruby, but I think you could write one.)

This particular example doesn't translate well into Python because a for-loop doesn't have a return value. Maybe that would be a future possibility if yield-expressions become accepted (just kidding :-). However, I can see other uses for looping over a sequence using a generator and telling the generator something interesting about each of the sequence's items, e.g.
whether they are green, or should be printed, or which dollar value they represent, if any (to make up a non-Boolean example).

Anyway, "continue EXPR" was born as I was thinking of a way to do this kind of thing in Python, since I didn't want to give up return as a way of breaking out of a loop (or several!) and returning from a function. But I'm the first to admit that the use case is still very much hypothetical -- unlike that for g.next(EXPR) and VAR = yield.

--
Guido van Rossum (home page: http://www.python.org/~guido/)
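That hypothetical use — telling a generator something about each item it hands out — can already be sketched with the send() spelling, without any new "continue EXPR" syntax; `tagger` is an invented example:

```python
def tagger(items):
    # Yield each item in turn; the caller answers each yield with a tag
    # for that item (what "continue EXPR" in a for-loop would have sent).
    tagged = []
    for item in items:
        tag = yield item
        tagged.append((item, tag))
    yield tagged            # final yield hands back the collected tags

g = tagger([-2, 3, 0, 7])
item = next(g)              # first item comes out of the priming call
while True:
    result = g.send(item > 0)   # tell the generator about each item
    if isinstance(result, list):
        break
    item = result
print(result)               # [(-2, False), (3, True), (0, False), (7, True)]
```

The driver loop here is the boilerplate that "continue EXPR" inside a for-loop would have hidden; whether that hiding is worth new syntax is exactly the open question in the thread.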
Guido van Rossum wrote:
However, I can see other uses for looping over a sequence using a generator and telling the generator something interesting about each of the sequence's items, e.g. whether they are green, or should be printed, or which dollar value they represent if any (to make up a non-Boolean example).
Anyway, "continue EXPR" was born as I was thinking of a way to do this kind of thing in Python, since I didn't want to give up return as a way of breaking out of a loop (or several!) and returning from a function.
But I'm the first to admit that the use case is still very much hypothetical -- unlike that for g.next(EXPR) and VAR = yield.
My use case for this is a directory tree walking generator that yields all the files, including the directories, in a depth-first manner. If a directory satisfies a condition (determined by the caller), the generator shall not descend into it. Something like:

    DONOTDESCEND = 1
    for path in mywalk("/usr/src"):
        if os.path.isdir(path) and os.path.basename(path) == "CVS":
            continue DONOTDESCEND
        # do something with path

Of course there are different solutions to this problem with callbacks or filters, but I like this one as the most elegant.

Joachim
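With the send()/yield-expression machinery that eventually shipped, the same do-not-descend protocol can be sketched without new loop syntax; the `SKIP` sentinel and this `mywalk` body are invented for illustration:

```python
import os
import tempfile

SKIP = object()   # sentinel: "do not descend into this directory"

def mywalk(top):
    # Depth-first walk yielding every path; if the caller send()s SKIP
    # back for a directory, its subtree is pruned.
    reply = yield top
    if reply is SKIP or not os.path.isdir(top):
        return
    for name in sorted(os.listdir(top)):
        sub = mywalk(os.path.join(top, name))
        try:
            item = next(sub)
            while True:
                # Forward the caller's reply down to the sub-walk
                # (pre-PEP 380, so delegation is done by hand).
                item = sub.send((yield item))
        except StopIteration:
            pass

# Build a throwaway tree: root/{CVS/inside.txt, src/a.txt}
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "CVS"))
os.makedirs(os.path.join(root, "src"))
open(os.path.join(root, "CVS", "inside.txt"), "w").close()
open(os.path.join(root, "src", "a.txt"), "w").close()

seen = []
walk = mywalk(root)
try:
    path = next(walk)
    while True:
        seen.append(os.path.relpath(path, root))
        prune = os.path.basename(path) == "CVS"
        path = walk.send(SKIP if prune else None)
except StopIteration:
    pass
print(seen)   # CVS is listed but its contents are pruned
```

The caller's send(SKIP) plays the role of "continue DONOTDESCEND": the answer to each yield steers the walk.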
On Fri, 2005-06-17 at 13:53, Joachim Koenig-Baltes wrote: [...]
My use case for this is a directory tree walking generator that yields all the files including the directories in a depth first manner. If a directory satisfies a condition (determined by the caller) the generator shall not descend into it.
Something like:
    DONOTDESCEND = 1
    for path in mywalk("/usr/src"):
        if os.path.isdir(path) and os.path.basename(path) == "CVS":
            continue DONOTDESCEND
        # do something with path
Of course there are different solutions to this problem with callbacks or filters, but I like this one as the most elegant.
I have implemented almost exactly this use-case using the standard Python generators, and shudder at the complexity something like this would introduce.

For me, the right solution would be to either write your own generator that "wraps" the other generator and filters it, or just make the generator with additional (default-value) parameters that support the DONOTDESCEND filtering.
FWIW, my use case is a directory comparison generator that walks two directories, producing tuples of corresponding files. It optionally will not descend directories in either tree that do not have a corresponding directory in the other tree. See:

http://minkirri.apana.org.au/~abo/projects/utils/
--
Donovan Baarda
[Joachim Koenig-Baltes]
My use case for this is a directory tree walking generator that yields all the files including the directories in a depth first manner. If a directory satisfies a condition (determined by the caller) the generator shall not descend into it.
Something like:
    DONOTDESCEND = 1
    for path in mywalk("/usr/src"):
        if os.path.isdir(path) and os.path.basename(path) == "CVS":
            continue DONOTDESCEND
        # do something with path
Of course there are different solutions to this problem with callbacks or filters, but I like this one as the most elegant.
[Donovan Baarda]
I have implemented almost exactly this use-case using the standard Python generators, and shudder at the complexity something like this would introduce.

For me, the right solution would be to either write your own generator that "wraps" the other generator and filters it, or just make the generator with additional (default-value) parameters that support the DONOTDESCEND filtering.

FWIW, my use case is a directory comparison generator that walks two directories, producing tuples of corresponding files. It optionally will not descend directories in either tree that do not have a corresponding directory in the other tree. See:
Thank both of you for the excellent posts. This is exactly the kind of feedback and analysis that will show whether "continue EXPR" is worth it.

Raymond
On Fri, 2005-06-17 at 00:43, Raymond Hettinger wrote:
Let me go on record as a strong -1 for "continue EXPR". The for-loop is our most basic construct and is easily understood in its present form. The same can be said for "continue" and "break" which have the added advantage of a near zero learning curve for people migrating from other languages.
Any urge to complicate these basic statements should be seriously scrutinized and held to high standards of clarity, explainability, obviousness, usefulness, and necessity. IMO, it fails most of those tests.
I would not look forward to explaining "continue EXPR" in the tutorial and think it would stand out as an anti-feature.
I'm sympathetic to this argument. I also find yield expressions jarring. I don't have any better suggestions though.

-Barry
At 08:03 PM 6/16/2005 -0700, Guido van Rossum wrote:
I do like "continue EXPR" but I have to admit I haven't even tried to come up with examples -- it may be unnecessary. As Phillip says, yield expressions and g.next(EXPR) are the core -- and also incidentally look like they will cause the most implementation nightmares. (If someone wants to start implementing these two now, go right ahead!)
FYI, I've started work on a patch. I've got argument passing into generators working, and compiling of parenthesized yield expressions, in both the C and Python compilers (although the output of the compiler package isn't tested yet). I haven't implemented no-argument yields yet, either, or unparenthesized yields on the far RHS of an assignment. I do plan to implement throw() as part of the same patch. Much of what remains is expanding the test suite and writing documentation, though.

It turns out that making 'next(EXPR)' work is a bit tricky; I was going to use METH_COEXIST and METH_VARARGS, but then it occurred to me that METH_VARARGS adds overhead to normal Python calls to 'next()', so I implemented a separate 'send(EXPR)' method instead, and left 'next()' a no-argument call. Whether this is the way it should really work or not is a PEP discussion, of course, but it does seem to me that making send(ob) and throw(typ, val, tb) separate methods from the iterator protocol is a reasonable thing to do.

Anyway, the patch isn't ready yet, but I hope to be able to post something for review before the weekend is out.
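The send()/throw() split described here is what Python ultimately shipped; a small sketch of the resulting protocol (`guarded` is an invented example):

```python
def guarded():
    # Collects whatever arrives at the yield: values via send(),
    # exceptions via throw().
    results = []
    while True:
        try:
            value = yield results
        except KeyError as e:
            results.append(("missed", e.args[0]))
        else:
            results.append(("got", value))

g = guarded()
next(g)                     # prime: run to the first yield
g.send(1)
g.throw(KeyError("oops"))   # raised at the paused yield, caught inside
print(g.send(2))            # [('got', 1), ('missed', 'oops'), ('got', 2)]
```

Note that next() stays a zero-argument call, exactly as the message proposes, while send() and throw() carry the extra information.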
At 10:26 PM 6/16/2005 -0400, Raymond Hettinger wrote:
288 was brought out of retirement a few months ago. Guido hated every variation of argument passing and frequently quipped that data passing was trivially accomplished through mutable arguments to a generator, through class-based iterators, or via a global variable. I believe all of those comments were made recently and they all apply equally to 342.
Clearly, then, he's since learned the error of his ways. :) More seriously, I would say that data passing is not the same thing as coroutine suspension, and that PEP 340 probably gave Guido a much better look at at least one use case for the latter. In the meantime, I applaud your foresight in having invented significant portions of PEP 343 years ahead of time. Now give Guido back his time machine, please. :) If you hadn't borrowed it to write the earlier PEP, he could have seen for himself that all this would happen, and neatly avoided it by just approving PEP 288 to start with. :)
At 08:24 PM 6/16/2005 -0400, Raymond Hettinger wrote:
As a further benefit, using attributes was a natural approach because that same technique has long been used with classes (so no new syntax was needed and the learning curve was zero).
On Friday 17 Jun 2005 02:53, Phillip J. Eby wrote:
Ugh. Having actually emulated co-routines using generators, I have to tell you that I don't find generator attributes natural for this at all; returning a value or error (via PEP 343's throw()) from a yield expression as in PEP 342 is just what I've been wanting.
We've been essentially emulating co-routines using generators embedded into a class to give us the equivalent of generator attributes. We've found this very natural for system composition. (Essentially it's a CSP-type system, though with an aim of ease of use.)

I've written up my talk from ACCU/Python UK this year, and it's available here: http://www.bbc.co.uk/rd/pubs/whp/whp113.shtml

I'll also be talking about it at Europython later this month.

At 08:03 PM 6/16/2005 -0700, Guido van Rossum wrote:
Someone should really come up with some realistic coroutine examples written using PEP 342 (with or without "continue EXPR").
On Friday 17 Jun 2005 05:07:22, Phillip J. Eby wrote:
How's this?
    def echo(sock):
        while True:
            try:
                data = yield nonblocking_read(sock)
                yield nonblocking_write(sock, data)
    ... snip ...
For comparison, our version of this would be:

    from Axon.Component import component
    from Kamaelia.SimpleServerComponent import SimpleServer

    class Echo(component):
        def mainBody(self):
            while True:
                if self.dataReady("inbox"):
                    self.send(data, "outbox")
                yield 1

    SimpleServer(protocol=EchoProtocol, port=1501).run()

For more interesting pipelines we have:

    pipeline(TCPClient("127.0.0.1", 1500),
             VorbisDecode(),
             AOAudioPlaybackAdaptor()
    ).run()

Which works in the same way as a Unix pipeline. I haven't written the "pipegraph" or similar component yet that could allow this:

    graph(A=SingleServer("0.0.0.0", 1500),
          B=Echo(),
          layout = { "A:outbox": "B:inbox", "B:outbox": "A:inbox" })

(Still undecided on the API for that really; currently the above is a lot more verbose :-)

By contrast I really can't see how passing attributes in via .next() helps this approach in any way (not that that's a problem for us :). I CAN see though that it helps if you're taking the generator-composition approach of twisted.flow (though I'll defer a good example for that to someone else since, although I've been asked for a comparison in the past, I don't think I'm sufficiently twisted to do so!).

Michael.
--
Michael Sparks, Senior R&D Engineer, Digital Media Group
Michael.Sparks@rd.bbc.co.uk, http://kamaelia.sourceforge.net/
British Broadcasting Corporation, Research and Development
Kingswood Warren, Surrey KT20 6NP
This e-mail may contain personal views which are not the views of the BBC.
Hello,

I found your paper very interesting. I have also written a very minimalistic white paper, mostly aimed at the PyGTK community, with a small module for pseudo-threads using Python generators: http://www.gnome.org/~gjc/gtasklet/gtasklets.html

I don't have time to follow this whole discussion, but I leave it here as another example of Python pseudo-threads. I also am very much in favour of having yield receive return values or exceptions, as this would make pseudo-threads much more elegant. And I very much wish Python had this built in or in the standard library.

In conjunction with pseudo-threads, I think a "Python main loop" implementation is fundamental. Such a main loop will permit the programmer to register callbacks for events, such as timeouts, IO conditions, idle tasks, etc., like the one found in glib (GTK+'s underlying library). I already pointed out one such implementation that I use for one of my projects, and it already has unit tests to prove that it works. This is also related to the "deprecate asyncore/asynchat" discussions going on earlier. IMHO, they should really be deprecated, and a pseudo-threads solution could be used instead.

Anyway, I'd love to help more in this area, but unfortunately I don't have time for these endless discussions... :P

Best regards.

On Fri, 2005-06-17 at 10:12 +0100, Michael Sparks wrote:
At 08:24 PM 6/16/2005 -0400, Raymond Hettinger wrote:
As a further benefit, using attributes was a natural approach because that same technique has long been used with classes (so no new syntax was needed and the learning curve was zero).
On Friday 17 Jun 2005 02:53, Phillip J. Eby wrote:
Ugh. Having actually emulated co-routines using generators, I have to tell you that I don't find generator attributes natural for this at all; returning a value or error (via PEP 343's throw()) from a yield expression as in PEP 342 is just what I've been wanting.
We've been essentially emulating co-routines using generators embedded into a class to give us the equivalent of generator attributes. We've found this very natural for system composition. (Essentially it's a CSP type system, though with an aim of ease of use)
I've written up my talk from ACCU/Python UK this year, and it's available here: http://www.bbc.co.uk/rd/pubs/whp/whp113.shtml
I'll also be talking about it at Europython later this month.
At 08:03 PM 6/16/2005 -0700, Guido van Rossum wrote:
Someone should really come up with some realistic coroutine examples written using PEP 342 (with or without "continue EXPR").
On Friday 17 Jun 2005 05:07:22, Phillip J. Eby wrote:
How's this?
    def echo(sock):
        while True:
            try:
                data = yield nonblocking_read(sock)
                yield nonblocking_write(sock, data)
    ... snip ...
For comparison, our version of this would be:
    from Axon.Component import component
    from Kamaelia.SimpleServerComponent import SimpleServer

    class Echo(component):
        def mainBody(self):
            while True:
                if self.dataReady("inbox"):
                    self.send(data, "outbox")
                yield 1
SimpleServer(protocol=EchoProtocol, port=1501).run()
For more interesting pipelines we have:
    pipeline(TCPClient("127.0.0.1", 1500),
             VorbisDecode(),
             AOAudioPlaybackAdaptor()
    ).run()
Which works in the same way as a Unix pipeline. I haven't written the "pipegraph" or similar component yet that could allow this:
    graph(A=SingleServer("0.0.0.0", 1500),
          B=Echo(),
          layout = { "A:outbox": "B:inbox", "B:outbox": "A:inbox" })
(Still undecided on API for that really, currently the above is a lot more verbose -)
By contrast I really can't see how passing attributes in via .next() helps this approach in any way (Not that that's a problem for us :).
I CAN see though it helps if you're taking the approach for generator composition if you're using twisted.flow (though I'll defer a good example for that to someone else since although I've been asked for a comparison in the past, I don't think I'm sufficiently twisted to do so!).
Michael.

--
Gustavo J. A. M. Carneiro
The universe is always one step beyond logic.
At 11:29 AM 6/17/2005 +0100, Gustavo J. A. M. Carneiro wrote:
In conjunction with pseudo-threads, I think a "Python main loop" implementation is fundamental. Such a main loop will permit the programmer to register callbacks for events, such as timeouts, IO conditions, idle tasks, etc., like the one found in glib (GTK+'s underlying library). I already pointed out one such implementation that I use for one of my projects, and it already has unit tests to prove that it works.
I think it's important to point out that such a "main loop" needs to be defined as an interface, rather than an implementation, because there are many such "main loops" out there as far as GTK, wx, OS X, etc. that have different implementation details as to how timeouts and I/O have to be managed. Since I see from your web page that you've looked at peak.events, I would refer you to the IEventLoop interface as an example of such an interface; any Twisted reactor can be adapted to provide most of the IEventLoop features (and vice versa), which means an interface like it should be usable on a variety of platforms.

Of course, I also think that before proposing an event loop facility for the stdlib, we should actually succeed in implementing next(arg), yield expressions, and throw(). I'll probably take a look at next()/throw() this weekend to see if there's anything I can contribute to the ceval.c part; I'm a bit less likely to be able to help on the compilation side of things, though. (Apart from maybe adding a POP_TOP after existing yield statements.)
participants (9)
- Barry Warsaw
- Donovan Baarda
- Guido van Rossum
- Gustavo J. A. M. Carneiro
- Joachim Koenig-Baltes
- Michael Sparks
- Phillip J. Eby
- Raymond Hettinger
- Raymond Hettinger