(...) But seriously, thanks for repeating the explanation for my benefit.
Glad it was useful. To be fair, I think this is the first time I've actually written the whole thing down. And I didn't even get the whole thing down; I missed the following important bit:
I see your example as a perfect motivation for adding some kind of map() primitive. (...)
You're correct, of course; technically, a map() primitive resolves all the same issues. It's possible to do everything with generator coroutines that it's possible to do with callbacks explicitly; I shouldn't have made the case for sequencing callbacks on the basis that the behavior can't be replicated. And, modulo any of my other suggestions, a "map" primitive is a good idea - Twisted implements such a primitive with 'gatherResults' (although, of course, it works on any Deferred, not just those returned by inlineCallbacks).
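For the record, here's roughly what using it looks like; 'fetch_user' is a made-up stand-in for any function that returns a Deferred:

    from twisted.internet.defer import gatherResults, succeed

    def fetch_user(uid):
        # Stand-in for an asynchronous lookup; a real one would return a
        # Deferred that fires later, from a socket or a database thread.
        return succeed({"id": uid})

    # One Deferred per request, all "in flight" at once.
    deferreds = [fetch_user(uid) for uid in (1, 2, 3)]

    # gatherResults fires with a list of all the results, in input order,
    # once every Deferred in the list has fired (here: immediately).
    d = gatherResults(deferreds)
    d.addCallback(lambda users: [u["id"] for u in users])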
The real problem with generator coroutines is that if you make them the primitive, you have an abstraction inversion if you want to have callbacks (which, IMHO, are simply more straightforward in many cases).
By using a generator scheduler, you're still using callbacks to implement the sequencing. At some point, some code has to be calling x.next(), x.send(...), x.close(), and catching StopIteration; those calls are just obscured by syntactic sugar. You still need a low-level callback-scheduling API to integrate with the heart of the event loop.
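To make that concrete, here's a deliberately stripped-down sketch of the kind of trampoline every 'yield'-based scheduler has to contain somewhere. This is not Twisted's actual implementation; 'Async' is just a bare-bones stand-in for whatever eventual-result object the yields produce, and error handling is omitted:

    class Async(object):
        """A bare-bones eventual result: one callback, fired at most once."""
        def __init__(self):
            self.callback = None
            self.fired = False
            self.value = None

        def on_result(self, callback):
            self.callback = callback
            if self.fired:
                callback(self.value)

        def fire(self, value):
            self.fired = True
            self.value = value
            if self.callback is not None:
                self.callback(value)

    def run_coroutine(gen):
        """Drive a generator that yields Async objects; no syntax, just callbacks."""
        done = Async()

        def step(value):
            try:
                # This is where the sugar bottoms out: advancing the
                # generator and catching StopIteration happen inside a
                # plain old callback.
                yielded = gen.send(value)
            except StopIteration:
                done.fire(None)
            else:
                yielded.on_result(step)

        step(None)  # send(None) starts the generator
        return done

A generator that does 'x = yield a1' gets advanced entirely by 'step', which is itself just a callback registered on whatever was yielded; the event loop only ever sees callbacks.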
One area where this abstraction inversion bites you is performance. Now, my experience might be dated here; I haven't measured in a few years, but as nice as generators can be for structuring complex event flows, that abstraction comes with a non-trivial performance cost. Exceptions in Python are much better than they used to be, but in CPython they're still not free. Every return value being replaced with a callback trampoline is bad, but replacing it instead with a generator being advanced, an exception being raised and a callback trampoline is worse.
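If anyone wants to check whether that's still true on their interpreter, a rough micro-benchmark along these lines takes a minute to run; the absolute numbers will obviously vary between CPython versions and PyPy, so treat it as a sketch rather than a claim about any particular release:

    import timeit

    def plain_call():
        return 42

    def one_shot():
        yield 42

    def advance_generator():
        g = one_shot()
        try:
            while True:
                next(g)
        except StopIteration:
            pass

    # Compare a plain call-and-return against creating a generator,
    # advancing it, and unwinding it via StopIteration.
    print("plain return:      ", timeit.timeit(plain_call))
    print("generator + raise: ", timeit.timeit(advance_generator))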
Of course, maybe inlineCallbacks is just badly implemented, but reviewing the implementation now it looks reasonably minimal.
I don't want to raise the specter of premature optimization here; I'm not claiming that the implementation of the scheduler needs to be squeezed for every ounce of performance before anyone implements anything. But, by building in the requirement for these unnecessary gyrations to support syntax sugar for every request/response event-driven operation, one precludes the possibility of low-level optimizations for performance-sensitive event coordination later.
Now, if a PyPy developer wants to chime in and tell me I'm full of crap, and either now or in the future StopIteration Exceptions will be free, and will actually send your CPU back in time, as well as giving a pet kitten as a present to a unicorn every time you 'raise', I'll probably believe it and happily retire this argument forever. But I doubt it.
I'll also grant that it's possible that I'm just the equivalent of a crotchety old assembler programmer here, claiming that we can't afford these fancy automatic register allocators and indirect function calls and run-time linking because they'll never be fast enough for real programs. But I will note that, rarely as you need it, assembler does still exist at some layer of the C compiler stack, and you can write it yourself if you really want to; nothing will get in your way.
So that's mainly the point I'm trying to make about a Deferred-like abstraction. Aside from matters of taste and performance, you need to implement your generator coroutines in terms of something, and it might as well be something clean and documented that can be used by people who feel they need it. This will also help if some future version of Python modifies something about the way that generators work, similar to the way .send() opened the door for non-ugly coroutines in the first place. Perhaps some optimized version of 'return' with a value? If the coroutine scheduler is firmly in terms of some other eventual-result API (Deferreds, Futures, Promises), then adding support to that scheduler for @yield_coroutine_v2 should be easy; as would adding support for other things I don't like, like tasklets and greenlets ;).
It also handles the input arriving in batches (as they do for App Engine Datastore queries). (...)
... I think I've mentioned <http://tm.tl/1956> already in one of my previous posts. ...
I'm curious as to how this works. If you are getting a Future callback, don't you only get that once? How do you re-sequence all of your generators to run the same step again when more data is available?
In general, whenever you want parallelism in Python, you have to introduce a new function, unless you happen to have a suitable function lying around already;
I'm glad we agree there, at least :).
so I don't feel I am contradicting myself by proposing a mechanism using callbacks here. It's the callbacks for sequencing that I dislike.
Earlier I was talking about implementing event sequencing as callbacks, which you kind of have to do either way. Separately, there's the issue of presenting event sequencing as control flow. While this is definitely useful for high-level applications - at my day job, about half the code I write is decorated with @inlineCallbacks - these high-level applications depend on a huge amount of low-level code (protocol parsers, database bindings, thread pools) being written and exhaustively tested, and the edge cases of that code are much easier to flesh out with explicit callbacks. When you need to test a portion of the control flow, there's no need to fool a generator into executing down to a specific branch point; you just pull the callback out to a top-level name rather than a closure and call it directly.
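As a (made-up) illustration of what I mean by pulling the callback out:

    # Instead of hiding the parsing step in a closure inside the function
    # that issues the request...
    def fetch_and_parse(connection, request):
        d = connection.send(request)      # hypothetical: returns a Deferred
        d.addCallback(_parse_response)    # ...the step has a module-level name.
        return d

    def _parse_response(raw_bytes):
        if not raw_bytes.startswith(b"OK "):
            raise ValueError("malformed response")
        return raw_bytes[3:]

    # A test can now exercise the edge case directly, with no generator to
    # coax down to the right branch and no reactor to spin:
    #     self.assertRaises(ValueError, _parse_response, b"ERR nope")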
Also, correct usage of generator coroutines depends on a previous understanding of event-driven programming. This is why Twisted core maintainers are not particularly sanguine about inlineCallbacks and generally consider it a power-tool for advanced users rather than an introductory facility to make things easier.
In our collective experience helping people understand both Deferreds and inlineCallbacks, there are different paths to enlightenment.
When learning Deferreds, someone with no previous event-driven experience will initially be disgusted: why does their code have to look like such a mess? Then they come to terms with the problem being solved and accept it, but move on to being perplexed: what the heck are these Deferreds doing, anyway? Eventually they start to understand what's happening, move on to depending on the reactor too much, and are somewhat baffled by callbacks never being called. Finally they realize they should start testing their code by firing Deferreds synchronously and inspecting results, and everything starts to come together.
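In miniature, that last step looks something like this; no reactor runs at all, and the "code under test" here is just a pair of callbacks standing in for something real:

    from twisted.internet.defer import Deferred

    def test_result_is_processed_when_it_arrives():
        d = Deferred()
        results = []
        d.addCallback(str.upper)        # stand-in for the code under test
        d.addCallback(results.append)

        d.callback("hello")             # fire the Deferred synchronously...
        assert results == ["HELLO"]     # ...and inspect the outcome immediately

    test_result_is_processed_when_it_arrives()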
Keep in mind, as you read the following, that I probably couldn't do my job as effectively without inlineCallbacks, and that I am probably its biggest fan on the Twisted team :).
When learning with inlineCallbacks, someone with no previous event-driven experience will usually be excited. The 'yield's are weird, but almost exciting - they make the code feel more advanced somehow - and they sort of understand the concurrency implications, but not really. It's also convenient! They just sprinkle in a 'yield' any time they need to make a call that looks like maybe it'll block sometimes.
Everything works okay for a while, and then (inevitably, it seems) they happen across some ordering bug they just absolutely cannot figure out, which causes either state corruption (because they blithely stuck a 'yield' between two things that really needed to be in an effective critical section) or hangs (generators sitting around waiting on un-fired Deferreds, so they don't even get the traceback from GC closing them because something is keeping a reference; harder to debug even than "normal" unfired Deferreds, because they're not familiar with how to inspect or trace the flow of event execution, since the code looked "normal").
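The first kind of bug is easiest to see in a deliberately tiny (and entirely made-up) example:

    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def withdraw(account, amount):
        balance = yield account.get_balance()    # hypothetical async API
        # Another withdraw() on the same account can run right here,
        # because the yield is a task-switch point; both coroutines then
        # write back a balance computed from the same stale read.
        yield account.set_balance(balance - amount)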
Now, this is easier to back out of than a massive multithreaded mess, because the code does at least have a finite number of visible task-switch points, and it's usually possible to track it down with some help. But the experience is not pleasant, because by this point there are usually 10-deep call-stacks of generator-calling-a-generator-calling-a-generator and, especially in the problematic cases, it's not clear what got started from where.
inlineCallbacks is a great boon to promoting Twisted usage, because some people never make it out of the "everything works okay for a while" phase, and it's much easier to get started. We certainly support it as best we can - optimize it, add debugging information to it - because we want people to have the best experience possible. So it's not like it's unmaintained or anything.
But, without Deferreds to fall back on in order to break sequencing down into super-explicit individual steps, without any potentially misleading syntactic sugar, I don't know how we'd help these folks.
I have a few thoughts on how our experiences have differed here, since I'm assuming you don't hear these sorts of complaints about NDB.
One is that Twisted users are typically dealing with a truly bewildering diversity of events, whereas NDB is, as you said, mostly a database client. It's not entirely unusual for a Twisted application to be processing events from a serial port, a USB device, some PTYs, a couple of server connections, some timed events, some threads (usually database connections) and some HTTP client connections.
Another is that we only hear from users with problems. Maybe there are millions of successful users of inlineCallbacks who have architected everything from tiny scripts to massive distributed systems without ever needing to say so much as a how-do-you-do to the Twisted mailing list or IRC channel. (Somehow I doubt this is completely accurate but maybe it accounts for some of our perspective.)
Nevertheless I feel like the strategy of backing out a generator into lower-level discrete callback-sequenced operations is a very important tool in the debugging toolbox.
Or maybe map_async()'s Future's result should be a set?
Well really it ought to be a dataflow of some kind so you can enumerate it as it's going :). But I think if the results arrive in some order you ought to be able to see that order in application code, even if you usually don't care.
I don't want to belabor this point, but it bugs me a little bit that we get so much feedback from the broader Python community along the lines of "Why doesn't Twisted do X?"
I don't think I quite said that.
Sorry, I didn't mean to say that you did. I raised the point because people who do say things like that tend to cite your opinions (for example, that Monocle is something new and different) as reasons why they thought Twisted didn't do what it does. (I certainly sympathize with the pressure that comes along with everyone scrutinizing every word one says and trying to discover hidden meaning; I'm sure that in a message as long as this one, someone will draw at least five wrong conclusions from me, too.)
But I suspect it happens because Twisted is hard to get into.
Part of it's a marketing issue. Like, if we just converted all of our examples to inlineCallbacks and let people trip over the problems we've seen later on, I'm sure we would get more adoption, and possibly not even a backlash later; people with bugs in their programs tend to think that there's a bug in their programs. They only blame the tools when the programs are hard to write in the first place.
Part of it is a background issue. GUI programmers and people who have worked with multiplayer games instantly recognize what Deferreds are for and are usually up and running within minutes. People primarily with experience with databases and web servers - a pretty big audience, in this day and age - are usually mystified.
But, there are intractable parts of it, too. The Twisted culture is all about extreme reliability and getting a good reputation for systems built using it, and I guess we've made some compromises about expanding our audience in service of that goal.
I suspect anything using higher-order functions this much has that problem; I feel this way about Haskell's Monads.
I've heard several people who do know Haskell say things like "Deferreds are just a trivial linearization of the I/O eigenfunctor over the monadic category of callbacks" and it does worry me. I still think they're relatively straightforward - I invented them in one afternoon when I was about 20 and they have changed relatively little since then - but if they're actually a homomorphism of the lambda calculus over the event manifold as it approaches the monad limit (or whatever: does anyone else feel like Haskell people have great ideas, but they have sworn a solemn vow to only describe them in a language that can only be translated by using undiscovered stone tablets buried on the dark side of the moon?) then I can understand why some users have a hard time.
I wouldn't be surprised if many Twisted lovers are also closet (or not) Haskell lovers.
There are definitely some appealing concepts there. Their 'async' package, for example, does everything in the completely wrong, naive but apparently straightforward way that Java originally did (asynchronous exceptions? communication via shared mutable state? everything's a thread? no event-driven I/O?) but I/O is so limited and the VM is so high-tech that it might actually be able to work. I suppose I can best summarize my feelings as <https://glyph.im/blob/EC0C1BF9-F79E-4F3D-A876-9273933E2E78.jpeg>.
Anyway, back on topic...
Yeah, all we do is dispatch Twisted events from the GUI's loop, usually using the GUI's built-in support for sockets. So your GUI app runs as a normal app. You can, of course, return a Deferred from a function that prompts the user for input, and fire it from a GUI callback, and that'll all work fine: Deferreds don't actually depend on the reactor at all, so you can use them from any callback (they only live in the 'internet' package, alongside the event loop, for unfortunate historical reasons).
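A sketch of that last pattern, with a made-up GUI API standing in for whatever toolkit you're using:

    from twisted.internet.defer import Deferred

    def ask_user(window, question):
        """Return a Deferred that fires with whatever the user types."""
        d = Deferred()
        dialog = window.prompt(question)              # hypothetical GUI call
        # The GUI toolkit invokes this from its own event loop; firing the
        # Deferred here is fine, since Deferreds don't need the reactor.
        dialog.on_submit(lambda text: d.callback(text))
        return d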