
Thanks Ludovic. On 28.07.2015 22:15, Ludovic Gasc wrote:
Hello,
This discussion is pretty interesting for trying to list when each architecture is the most efficient, based on the need.
However, just a small precision: multiprocess/multiworker isn't incompatible with AsyncIO: you can have an event loop in each process to try to combine the "best" of two "worlds". As usual in IT, it isn't a silver bullet that will cure cancer; however, at least to my understanding, it should be useful for some business needs like server daemons.
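For readers who haven't seen it, here is a minimal sketch of the "one event loop per worker process" idea using only the standard library (Python 3.5 async/await syntax and Unix fork semantics; the names handle_client and worker are made up for illustration, and this is not how API-Hour or Nginx are actually structured):

    import asyncio
    import multiprocessing
    import socket

    async def handle_client(reader, writer):
        # Trivial echo handler: read one chunk, write it back, close.
        data = await reader.read(1024)
        writer.write(data)
        await writer.drain()
        writer.close()

    def worker(sock):
        # Each worker process runs its own, independent event loop,
        # serving on the listening socket inherited from the parent.
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        loop.run_until_complete(asyncio.start_server(handle_client, sock=sock))
        loop.run_forever()

    if __name__ == '__main__':
        # The parent opens the listening socket once, then starts one
        # worker per CPU core; the kernel distributes accept() calls
        # among them (works as-is with the fork start method on Unix).
        sock = socket.socket()
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(('127.0.0.1', 8000))
        sock.listen(128)
        for _ in range(multiprocessing.cpu_count()):
            multiprocessing.Process(target=worker, args=(sock,)).start()

CPU-bound work still competes within each process, of course; the gain is that every core gets its own loop for the I/O-bound part.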
I think that should be clear to everybody using any of these modules. But you are right to point it out explicitly.
It isn't a crazy new idea; this design pattern has been implemented for a long time, at least in Nginx: http://www.aosabook.org/en/nginx.html
If you are interested in using this design pattern to build an HTTP server only, you can easily use aiohttp.web + gunicorn: http://aiohttp.readthedocs.org/en/stable/gunicorn.html If you want to use any AsyncIO server protocol (aiohttp.web, panoramisk, asyncssh, irc3d), you can use API-Hour: http://www.api-hour.io
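To make the aiohttp.web + gunicorn combination concrete, a minimal app along the lines of the linked documentation might look like this (the handler name is mine, and the exact worker class path can differ between aiohttp versions, so check the docs for the version you have installed):

    # app.py
    from aiohttp import web

    async def hello(request):
        # One request handler; gunicorn starts several worker
        # processes, each running its own event loop around this app.
        return web.Response(text='Hello from a worker process\n')

    app = web.Application()
    app.router.add_route('GET', '/', hello)

started with something like:

    gunicorn app:app --workers 4 --worker-class aiohttp.worker.GunicornWebWorker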
And if you want to implement this design pattern yourself, be my guest: if a Python peon like me could implement API-Hour, everybody on this mailing list can do it.
For communication between workers, I use Redis; however, you have plenty of solutions to do that. As usual, before selecting a communication mechanism you should benchmark based on your use cases: some results may surprise you.
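The benchmarking advice is easy to act on with the standard library alone; for example, a rough sketch timing round-trips through a multiprocessing.Queue versus a Pipe (message size and count are arbitrary placeholders to adapt; absolute numbers vary wildly by platform, which is exactly the point):

    import multiprocessing
    import time

    N = 10000
    MSG = b'x' * 64  # arbitrary payload size; vary it for your use case

    def echo_queue(inq, outq):
        # Echo every message back until the None sentinel arrives.
        for msg in iter(inq.get, None):
            outq.put(msg)

    def echo_pipe(conn):
        for _ in range(N):
            conn.send(conn.recv())

    if __name__ == '__main__':
        inq, outq = multiprocessing.Queue(), multiprocessing.Queue()
        p = multiprocessing.Process(target=echo_queue, args=(inq, outq))
        p.start()
        start = time.perf_counter()
        for _ in range(N):
            inq.put(MSG)
            outq.get()
        print('Queue round-trips:', time.perf_counter() - start)
        inq.put(None)
        p.join()

        parent, child = multiprocessing.Pipe()
        p = multiprocessing.Process(target=echo_pipe, args=(child,))
        p.start()
        start = time.perf_counter()
        for _ in range(N):
            parent.send(MSG)
            parent.recv()
        print('Pipe round-trips: ', time.perf_counter() - start)
        p.join()

The same harness extends naturally to Redis or any other broker: swap the transport, keep the measurement.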
I hope not to disappoint you. I actually strive not to do that manually for each tiny bit of the program (assuming there are many places in the code base where a project could benefit from concurrency). Personally, I use benchmarks for optimizing problematic code. But if Python were able to do that without me choosing the right and correctly configured approach (to be determined by benchmarks), that would be awesome. As usual, that needs time to evolve.

I found that benchmark-driven improvements do not last forever, unfortunately, and that most of the time nobody is able to keep track of everything. So, as soon as something changes, you need to start anew. That is not acceptable for me. Btw, that is also a reason why I said recently (in another topic on this list): 'if Python could optimize that without my attention, that would be great'.

The simplest solution, and therefore the easiest to comprehend for all team members, is the way to go. If that is not efficient enough, that is actually a Python issue. Readability counts most. And fortunately, in most cases that attitude works perfectly with Python. :)
Have a nice week.
PS: Thank you everybody for EuroPython, it was amazing ;-)
-- Ludovic Gasc (GMLudo) http://www.gmludo.eu/
2015-07-26 23:26 GMT+02:00 Sven R. Kunze <srkunze@mail.de>:
Next update:
Improving Performance by Running Independent Tasks Concurrently - A Survey
               | processes               | threads                    | coroutines
---------------+-------------------------+----------------------------+-------------------------
purpose        | cpu-bound tasks         | cpu- & i/o-bound tasks     | i/o-bound tasks
               |                         |                            |
managed by     | os scheduler            | os scheduler + interpreter | customizable event loop
controllable   | no                      | no                         | yes
               |                         |                            |
parallelism    | yes                     | depends (cf. GIL)          | no
switching      | at any time             | after any bytecode         | at user-defined points
shared state   | no                      | yes                        | yes
               |                         |                            |
startup impact | biggest/medium*         | medium                     | smallest
cpu impact**   | biggest                 | medium                     | smallest
memory impact  | biggest                 | medium                     | smallest
               |                         |                            |
pool module    | multiprocessing.Pool    | multiprocessing.dummy.Pool | asyncio.BaseEventLoop
solo module    | multiprocessing.Process | threading.Thread           | ---
*  biggest - if spawn (fork+exec) and always on Windows
   medium  - if fork alone
** due to context switching
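For completeness, the three "pool module" rows of the table could be exercised on the same toy job roughly like this (work and awork are placeholder functions; with a CPU-bound body like the one below only the process pool gains real parallelism, per the GIL row above):

    import asyncio
    import multiprocessing
    import multiprocessing.dummy

    def work(n):
        # Placeholder task; CPU-bound on purpose.
        return sum(i * i for i in range(n))

    async def awork(n):
        # Coroutine flavour; only worthwhile when the waiting is real
        # I/O, e.g. a network call instead of this asyncio.sleep(0).
        await asyncio.sleep(0)
        return sum(i * i for i in range(n))

    if __name__ == '__main__':
        jobs = [10**5] * 8

        with multiprocessing.Pool() as pool:          # processes
            print(sum(pool.map(work, jobs)))

        with multiprocessing.dummy.Pool() as pool:    # threads
            print(sum(pool.map(work, jobs)))

        # coroutines (on modern Python, asyncio.run() replaces this
        # explicit loop boilerplate)
        loop = asyncio.get_event_loop()
        results = loop.run_until_complete(
            asyncio.gather(*(awork(n) for n in jobs)))
        print(sum(results))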
On 26.07.2015 14:18, Paul Moore wrote:
Just as a note - even given the various provisos and "it's not that simple" comments that have been made, I found this table extremely useful. Like any such high-level summary, I expect to have to take it with a pinch of salt, but I don't see that as an issue - anyone who doesn't fully appreciate that there are subtleties, probably wouldn't read a longer explanation anyway.
So many thanks for taking the time to put this together (and for continuing to improve it).
You are welcome. :)
+1 on something like this ending up in the Python docs somewhere.
Not sure what the process for this is, but I think the Python gurus will find a way.