This discussion is pretty interesting as an attempt to list when each architecture is the most efficient, depending on the need.

However, one small clarification: multiprocess/multi-worker isn't incompatible with AsyncIO: you can have an event loop in each process to try to combine the "best" of both "worlds".
As usual in IT, it isn't a silver bullet that will cure cancer; however, at least to my understanding, it should be useful for some business needs like server daemons.

It isn't a crazy new idea; this design pattern has been implemented for a long time, at least in Nginx: http://www.aosabook.org/en/nginx.html

If you are interested in using this design pattern to build an HTTP server only, you can easily use aiohttp.web + Gunicorn: http://aiohttp.readthedocs.org/en/stable/gunicorn.html
If you want to use any AsyncIO server protocol (aiohttp.web, panoramisk, asyncssh, irc3d), you can use API-Hour: http://www.api-hour.io
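For the aiohttp.web + Gunicorn route, deployment is just a Gunicorn invocation with aiohttp's worker class (this assumes a hypothetical app.py module exposing an aiohttp.web.Application named "app"; see the docs linked above for the full setup):

```shell
# 4 pre-forked processes, each running its own asyncio event loop
gunicorn app:app -w 4 -k aiohttp.worker.GunicornWebWorker
```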

And if you want to implement this design pattern yourself, be my guest: if a Python peon like me could implement API-Hour, everybody on this mailing list can do it.

For communication between workers, I use Redis; however, you have plenty of solutions to do that.
As usual, before selecting a communication mechanism, you should benchmark based on your use cases: some results may surprise you.
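A tiny sketch of such a benchmark, using stdlib multiprocessing.Queue as a stand-in transport (a real comparison would run the same round-trip loop over Redis, pipes, sockets, etc., with your actual payload sizes):

```python
import multiprocessing
import time

def echo(inbox, outbox):
    # Worker: bounce every message back until the None sentinel arrives.
    for msg in iter(inbox.get, None):
        outbox.put(msg)

def bench(n_messages=1000, payload=b"x" * 128):
    # Mean round-trip time, in seconds, for one transport.
    inbox, outbox = multiprocessing.Queue(), multiprocessing.Queue()
    p = multiprocessing.Process(target=echo, args=(inbox, outbox))
    p.start()
    start = time.perf_counter()
    for _ in range(n_messages):
        inbox.put(payload)
        assert outbox.get() == payload
    elapsed = time.perf_counter() - start
    inbox.put(None)  # tell the worker to stop
    p.join()
    return elapsed / n_messages

if __name__ == "__main__":
    print("mean round-trip: %.1f us" % (bench() * 1e6))
```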

Have a nice week.

PS: Thank you everybody for EuroPython, it was amazing ;-)

Ludovic Gasc (GMLudo)

2015-07-26 23:26 GMT+02:00 Sven R. Kunze <srkunze@mail.de>:
Next update:

Improving Performance by Running Independent Tasks Concurrently - A Survey

               | processes               | threads                    | coroutines             
purpose        | cpu-bound tasks         | cpu- & i/o-bound tasks     | i/o-bound tasks        
               |                         |                            |                        
managed by     | os scheduler            | os scheduler + interpreter | customizable event loop
controllable   | no                      | no                         | yes                    
               |                         |                            |                        
parallelism    | yes                     | depends (cf. GIL)          | no                     
switching      | at any time             | after any bytecode         | at user-defined points 
shared state   | no                      | yes                        | yes                    
               |                         |                            |                        
startup impact | biggest/medium*         | medium                     | smallest               
cpu impact**   | biggest                 | medium                     | smallest               
memory impact  | biggest                 | medium                     | smallest               
               |                         |                            |                        
pool module    | multiprocessing.Pool    | multiprocessing.dummy.Pool | asyncio.BaseEventLoop  
solo module    | multiprocessing.Process | threading.Thread           | ---                    

*  biggest - if spawn (fork+exec) and always on Windows
   medium - if fork alone

** due to context switching
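As a side note on the two pool rows: multiprocessing.Pool and multiprocessing.dummy.Pool expose the same API over processes and threads respectively, so switching between them is a one-line change, e.g.:

```python
import multiprocessing
import multiprocessing.dummy

def square(n):
    # cpu-style work; threads would shine with i/o-bound work instead
    return n * n

if __name__ == "__main__":
    with multiprocessing.Pool(2) as procs:
        print(procs.map(square, range(5)))    # [0, 1, 4, 9, 16]
    with multiprocessing.dummy.Pool(2) as threads:
        print(threads.map(square, range(5)))  # [0, 1, 4, 9, 16]
```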

On 26.07.2015 14:18, Paul Moore wrote:
> Just as a note - even given the various provisos and "it's not that
> simple" comments that have been made, I found this table extremely
> useful. Like any such high-level summary, I expect to have to take it
> with a pinch of salt, but I don't see that as an issue - anyone who
> doesn't fully appreciate that there are subtleties, probably wouldn't
> read a longer explanation anyway.
>
> So many thanks for taking the time to put this together (and for
> continuing to improve it).

You are welcome. :)

> +1 on something like this ending up in the Python docs somewhere.

Not sure what the process for this is, but I think the Python gurus will find a way.

Python-ideas mailing list
Code of Conduct: http://python.org/psf/codeofconduct/