Questions about GIL and web services from a n00b
cournape at gmail.com
Sat Apr 16 04:59:57 CEST 2011
On Sat, Apr 16, 2011 at 10:05 AM, Raymond Hettinger <python at rcn.com> wrote:
>> > Is the limiting factor CPU?
>> > If it isn't (i.e. you're blocking on IO to/from a web service) then the
>> > GIL won't get in your way.
>> > If it is, then run as many parallel *processes* as you have cores/CPUs
>> > (assuming you're designing an application that can have multiple
>> > instances running in parallel so that you can run over multiple servers
>> > anyway).
>> Great question. At this point, there isn't a limiting factor, but yes
>> the concern is around CPU in the future with lots of threads handling
>> many simultaneous transactions.
> In the Python world, the usual solution to high transaction loads is
> to use event-driven processing (using an async library such as
> Twisted) rather than using multi-threading which doesn't scale well in
> any language.
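For readers unfamiliar with the event-driven style Raymond is describing,
here is a toy sketch using only the stdlib select module (the function and
its parameters are made up for illustration; a real service would use a
framework such as Twisted rather than a hand-rolled loop):

```python
import select
import socket

def echo_loop(listener, max_events=10):
    # Toy single-threaded event loop: one thread multiplexes many
    # sockets via select(), so there are no per-connection threads
    # and no GIL contention between them.
    sockets = [listener]
    handled = 0
    while handled < max_events:
        readable, _, _ = select.select(sockets, [], [], 5.0)
        if not readable:
            break  # idle timeout; stop the demo loop
        for s in readable:
            if s is listener:
                conn, _addr = listener.accept()
                sockets.append(conn)
            else:
                data = s.recv(4096)
                if data:
                    s.sendall(data.upper())  # the "work" happens inline
                    handled += 1
                else:
                    sockets.remove(s)
                    s.close()
    return handled
```

Note that the work done for each event runs inline on the single thread,
which is exactly why CPU-heavy handlers are a problem in this style.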
My experience is that if you are CPU bound, asynchronous programming
in Python can be more a curse than a blessing, mostly because of the
need to insert "scheduling points" at the right places to avoid
blocking, and because profiling becomes that much harder in an
event-driven design.
It depends of course on the application, but designing from the ground
up with the idea of running multiple processes is what seems to be the
most natural way of scaling - this does not prevent using async in
each process. This has its own issues, though (e.g. in terms of
administration and monitoring).
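A minimal sketch of that multiple-process design, using the stdlib
multiprocessing module (the handler function and its workload are made
up for illustration):

```python
from multiprocessing import Pool, cpu_count

def handle_transaction(n):
    # Hypothetical CPU-bound work standing in for one transaction.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    # One worker process per core sidesteps the GIL entirely:
    # each process has its own interpreter and its own lock.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(handle_transaction, [100000] * cpu_count())
    print(results)
```

Each process is free to run its own async loop internally, which is the
combination described above.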
Chris, the tornado documentation mentions a simple way to get multiple
processes on one box: http://www.tornadoweb.org/documentation (the
section mentioning nginx for load balancing). The principle is quite
common and is applicable to most frameworks (the solution is not
specific to tornado).
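The nginx side of that setup looks roughly like this (ports and the
upstream name are made up for illustration; see the tornado docs for a
complete config):

```nginx
# Spread requests across several backend processes on one box.
upstream tornado_backends {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen 80;
    location / {
        proxy_pass http://tornado_backends;
    }
}
```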