On Fri, Oct 12, 2012 at 9:46 PM, Glyph firstname.lastname@example.org wrote:
> There has been a lot written on this list about asynchronous, microthreaded and event-driven I/O in the last couple of days. There's too much for me to try to respond to all at once, but I would very much like to (possibly re-)introduce one very important point into the discussion.
> Would everyone interested in this please please please read https://github.com/lvh/async-pep/blob/master/pep-3153.rst several times? Especially this section: https://github.com/lvh/async-pep/blob/master/pep-3153.rst#why-separate-protocols-and-transports. If it is not clear, please ask questions about it and I will try to needle someone qualified into improving the explanation.
I am well aware of that section. But, like the rest of PEP 3153, it is sorely lacking in examples or specifications.
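For concreteness, here is a minimal sketch of the separation that section argues for. All class and method names here are my own invention, not anything the PEP specifies:

```python
# A sketch (hypothetical names) of the transport/protocol split: the
# event loop owns recv(); the protocol only ever sees bytes that have
# already been read on its behalf.

class EchoProtocol:
    """Application code: never calls recv() itself."""
    def __init__(self):
        self.received = bytearray()

    def data_received(self, data):
        # Called by the transport with bytes recv() already returned.
        self.received.extend(data)


class LoopbackTransport:
    """Stands in for the networking layer.  Whether it was woken by a
    level- or edge-triggered readiness notification is invisible here:
    it simply delivers whatever recv() produced."""
    def __init__(self, protocol):
        self.protocol = protocol

    def feed(self, data):
        # In a real loop this would be: data = sock.recv(4096)
        self.protocol.data_received(data)


proto = EchoProtocol()
transport = LoopbackTransport(proto)
transport.feed(b"GET / HTTP/1.1\r\n")
print(bytes(proto.received))
```

The point of the shape, as I understand it, is that nothing in EchoProtocol needs to know which readiness mechanism woke the loop.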
> I am bringing this up because I've seen a significant amount of discussion of level-triggering versus edge-triggering. Once you have properly separated out transport logic from application implementation, triggering style is an irrelevant, private implementation detail of the networking layer.
This could mean several things: (a) only the networking layer needs to use both trigger styles, the rest of your code should always use trigger style X (and please let X be edge-triggered :-); (b) only in the networking layer is it important to distinguish carefully between the two, in the rest of the app you can use whatever you like best.
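For reference, here is the distinction as the networking layer sees it. This is a Linux-only sketch, since `select.epoll` exists only there:

```python
# Level-triggered vs edge-triggered readiness, observed directly.
# Level-triggered epoll keeps reporting a readable fd until it is
# drained; edge-triggered (EPOLLET) reports it once per arrival of
# new data.

import select
import socket

a, b = socket.socketpair()
a.sendall(b"x")                       # make b readable

# Level-triggered (the default): reported on every poll until drained.
ep = select.epoll()
ep.register(b.fileno(), select.EPOLLIN)
lt_first = len(ep.poll(0))            # 1: readable
lt_second = len(ep.poll(0))           # 1: still readable, nothing drained
ep.close()

# Edge-triggered: reported once, then only again after new data.
ep = select.epoll()
ep.register(b.fileno(), select.EPOLLIN | select.EPOLLET)
et_first = len(ep.poll(0))            # 1: the pending byte counts as an edge
et_second = len(ep.poll(0))           # 0: no new data has arrived
ep.close()

print(lt_first, lt_second, et_first, et_second)
a.close(); b.close()
```

Either way the networking layer ends up calling the same protocol callback; only its bookkeeping differs.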
> Whether the operating system tells Python "you must call recv() once now" or "you must call recv() until I tell you to stop" should not matter to the application if the application is just getting passed the results of recv() which has already been called. Since not all I/O libraries actually have a recv() to call, you shouldn't have the application have to call it. This is perhaps the central design error of asyncore.
Is this about buffering? Because I think I understand buffering. Filling up a buffer with data as it comes in (until a certain limit) is a good job for level-triggered callbacks. Ditto for draining a buffer. The rest of the app can then talk to the buffer and tell it "give me between X and Y bytes, possibly blocking if you don't have at least X available right now", or "here are N more bytes, please send them out when you can". From the app's position these calls *may* block, so they need to use whatever mechanism (callbacks, Futures, Deferreds, yield, yield-from) to ensure that *if* they block, other tasks can run. But the common case is that they don't actually need to block because there is still data / space in the buffer. (You could also have an exception for write() and make that never-blocking, trusting the app not to overfill the buffer; this seems convenient but it worries me a bit.)
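A toy version of the reading half of that buffer, using a Future as the blocking mechanism. The API names here are made up for illustration:

```python
# The level-triggered networking layer calls feed() as data arrives;
# the app calls read(), which "blocks" (returns a pending Future) only
# when fewer than min_bytes are buffered.  One waiter at a time, to
# keep the sketch small.

from concurrent.futures import Future

class ReadBuffer:
    def __init__(self):
        self._buf = bytearray()
        self._waiter = None  # (future, min_bytes, max_bytes)

    def feed(self, data):
        """Called by the event loop whenever recv() returns data."""
        self._buf.extend(data)
        if self._waiter is not None:
            fut, lo, hi = self._waiter
            if len(self._buf) >= lo:
                self._waiter = None
                fut.set_result(self._take(hi))

    def read(self, min_bytes, max_bytes):
        """Return a Future for between min_bytes and max_bytes.

        The common case -- enough data already buffered -- completes
        immediately; otherwise the caller waits on the Future."""
        fut = Future()
        if len(self._buf) >= min_bytes:
            fut.set_result(self._take(max_bytes))
        else:
            self._waiter = (fut, min_bytes, max_bytes)
        return fut

    def _take(self, n):
        data = bytes(self._buf[:n])
        del self._buf[:n]
        return data


buf = ReadBuffer()
f = buf.read(4, 10)        # not enough data yet: Future is pending
buf.feed(b"hello world")   # networking layer delivers bytes
print(f.result())          # b'hello worl' (capped at 10 bytes)
f2 = buf.read(1, 4)        # common case: completes without blocking
print(f2.result())         # b'd'
```

The same shape works with callbacks, Deferreds or yield-from in place of the Future; the buffer doesn't care which.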
> If it needs a name, I suppose I'd call my preferred style "event triggering".
But how does it work? What would typical user code in this style look like?
> Also, I would like to remind all participants that microthreading, request/response abstraction (i.e. Deferreds, Futures), generator coroutines and a common API for network I/O are all very different tasks and do not need to be accomplished all at once. If you try to build something that does all of this stuff, you get most of Twisted core plus half of Stackless all at once, which is a bit much for the stdlib to bite off in one chunk.
Well understood. (And I don't even want to get microthreading into the mix, although others may disagree -- I see Christian Tismer has jumped in...) But I also think that if we design these things in isolation it's likely that we'll find later that the pieces don't fit, and I don't want that to happen either. So I think we should consider these separate, but loosely coordinated efforts.