On Oct 2, 2015, at 14:41, Christoph Groth email@example.com wrote:
Guido van Rossum wrote:
Well, what else did you have in mind? Remember that until we fix the GIL (which won't happen until Python 4) you won't get benefit from async programming unless you are overlapping with I/O, and (due to the limitations of select/poll/etc.) that is pretty much limited to network and IPC (disk I/O in particular cannot be overlapped in this way).
I recently realized that asynchronous programming is also useful for scientific computation: it makes it possible to express quite naturally a complex algorithm that launches background computations at various stages.
A good application of this technique is adaptive numerical integration. See  for an example script that uses asyncio. There, the asynchronous master Python process runs the integration algorithm (which is computationally cheap but has quite complex control flow), while the integrand function is evaluated by worker processes.
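The referenced script isn't included here, but the shape of the idea can be sketched as follows. This is a minimal, hypothetical example (not the author's actual code): an adaptive trapezoid rule where the master coroutine drives the control flow and ships every integrand evaluation to an executor via `loop.run_in_executor`. The integrand `f` and all names are illustrative.

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor


def f(x):
    # Stand-in for an expensive integrand evaluated in a worker.
    return x * x


async def integrate(executor, a, b, tol=1e-4):
    """Adaptive trapezoid rule: bisect until the two-panel refinement
    agrees with the one-panel estimate to within tol."""
    loop = asyncio.get_running_loop()
    # The three sample points are evaluated concurrently by workers.
    fa, fm, fb = await asyncio.gather(
        *(loop.run_in_executor(executor, f, x) for x in (a, (a + b) / 2, b)))
    whole = (b - a) * (fa + fb) / 2
    halves = (b - a) * (fa + 2 * fm + fb) / 4
    if abs(whole - halves) < tol:
        return halves
    m = (a + b) / 2
    # Both halves are refined concurrently; the control flow stays in
    # the (cheap) master process.
    left, right = await asyncio.gather(
        integrate(executor, a, m, tol / 2),
        integrate(executor, m, b, tol / 2))
    return left + right


async def main():
    # For a CPU-bound integrand, use worker processes; guard with
    # `if __name__ == "__main__"` when running this as a script.
    with ProcessPoolExecutor() as pool:
        return await integrate(pool, 0.0, 1.0)
```

The executor argument is the swap point: any object with a compatible `submit` interface (threads, processes, or a cluster backend) can stand in without touching the algorithm.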
Now, this particular example uses concurrent.futures, but one could also use MPI, scoop, or 0mq. This is what made me wonder about the monolithic design of asyncio.
Why would you want asyncio here? You explicitly need parallelism, and independent threads/processes rather than coroutines (with or without an event loop). And it seems like composable futures are exactly the abstraction you want.
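To make the contrast concrete: here is a hypothetical sketch of the same kind of adaptive integration written against plain `concurrent.futures`, with no event loop at all. A work list of intervals replaces the coroutines; each round submits the pending evaluations to the pool and either accepts or splits each interval. All names (`f`, `integrate`) are illustrative.

```python
from concurrent.futures import ProcessPoolExecutor


def f(x):
    # Stand-in for an expensive integrand evaluated in a worker.
    return x * x


def integrate(pool, a, b, tol=1e-4):
    """Adaptive trapezoid rule driven by plain futures: the work list
    grows whenever an interval needs refining."""
    total, work = 0.0, [(a, b, tol)]
    while work:
        # Submit the three sample points of every pending interval;
        # the evaluations run in parallel on the pool.
        futs = [(iv, [pool.submit(f, x)
                      for x in (iv[0], (iv[0] + iv[1]) / 2, iv[1])])
                for iv in work]
        work = []
        for (a, b, tol), fs in futs:
            fa, fm, fb = (fut.result() for fut in fs)
            whole = (b - a) * (fa + fb) / 2
            halves = (b - a) * (fa + 2 * fm + fb) / 4
            if abs(whole - halves) < tol:
                total += halves
            else:
                m = (a + b) / 2
                work += [(a, m, tol / 2), (m, b, tol / 2)]
    return total
```

The controller never mentions threads, processes, or an event loop; it only composes futures, which is the point being made above.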
It seems like what you really need here is to extend concurrent.futures (or make it easier for end users and third-party libs to extend it) to work with 0mq, etc., so you can leave the controller code alone while swapping out the dispatch mechanism.
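Mechanically, that extension point already exists: a third-party backend only has to subclass `concurrent.futures.Executor` and implement `submit`, returning a `Future` that it resolves when the work completes. Here is a toy sketch (the class name and inline execution are illustrative; a real ZeroMQ or MPI backend would ship the call to a remote worker in `submit` and set the result when the reply arrives):

```python
from concurrent.futures import Executor, Future


class InlineExecutor(Executor):
    """Toy drop-in Executor that runs tasks inline in the caller.

    A real backend would serialize (fn, args) in submit(), send it to a
    worker over its transport of choice, and call set_result() /
    set_exception() on the Future when the reply comes back. The
    controller code using the executor is unchanged either way.
    """

    def submit(self, fn, *args, **kwargs):
        fut = Future()
        try:
            fut.set_result(fn(*args, **kwargs))
        except BaseException as exc:
            fut.set_exception(exc)
        return fut
```

Because `Executor` supplies `map`, `shutdown`, and the context-manager protocol on top of `submit`, this one method is enough to make the backend interchangeable with the stdlib thread and process pools.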