[Python-Dev] "Fixing" the new GIL

"Martin v. Löwis" martin at v.loewis.de
Tue Mar 16 08:59:16 CET 2010


Cameron Simpson wrote:
> On 15Mar2010 09:28, Martin v. Löwis <martin at v.loewis.de> wrote:
> | > As for the argument that an application with cpu intensive work being
> | > driven by the IO itself will work itself out...  No it won't, it can
> | > get into beat patterns where it is handling requests quite rapidly up
> | > until one that causes a long computation to start comes in.  At that
> | > point it'll stop performing well on other requests for as long (it
> | > could be a significant amount of time) as the cpu intensive request
> | > threads are running.  That is not a graceful degradation in serving
> | > capacity / latency as one would normally expect.  It is a sudden drop
> | > off.
> | 
> | Why do you say that? The other threads continue to be served - and
> | Python couldn't use more than one CPU, anyway. Can you demonstrate that
> | in an example?
> 
> Real example:

... unfortunately without a demonstration. What's the throughput under
the old GIL? What's the throughput of this application under the new
GIL? How can I observe the "beat pattern"?
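For illustration only (this is not the poster's application), a minimal sketch of the kind of demonstration being asked for: one thread spins in pure-Python CPU-bound work while another times a stream of tiny "requests". Plotting or inspecting the recorded latencies under the old vs. new GIL would show whether a beat pattern appears. All names here are made up for the sketch.

```python
import threading
import time

def cpu_hog(stop):
    # Pure-Python busy loop: holds the GIL except at the
    # interpreter's periodic switch points.
    while not stop.is_set():
        sum(range(10_000))

def measure_tiny_request_latency(samples=200):
    # Each "request" is near-zero work; what we actually measure
    # is how long the thread waits to get the GIL back.
    latencies = []
    for _ in range(samples):
        t0 = time.perf_counter()
        _ = len("tiny request")
        latencies.append(time.perf_counter() - t0)
    return latencies

stop = threading.Event()
hog = threading.Thread(target=cpu_hog, args=(stop,))
hog.start()
lat = measure_tiny_request_latency()
stop.set()
hog.join()

print(f"max latency: {max(lat) * 1e6:.0f} us, "
      f"median: {sorted(lat)[len(lat) // 2] * 1e6:.0f} us")
```

Comparing the max/median spread of `lat` across interpreter versions would quantify the latency impact rather than assert it.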

> The idea here is that one has a few threads receiving requests (eg a
> daemon watching a socket or monitoring a db queue table) which then use
> the FuncMultiQueue to manage how many actual requests are processed
> in parallel (yes, a semaphore can cover a lot of this, but not the
> asynchronous call modes).

Why do you say processing is in parallel? In Python, processing is
normally never in parallel, but always sequential (potentially
interleaved). Are you releasing the GIL for processing?
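Martin's point can be seen in a small, self-contained experiment (an illustrative sketch, not anyone's production code): two pure-Python CPU-bound threads do not finish faster than running the same work sequentially, because only the thread holding the GIL executes bytecode at any moment.

```python
import threading
import time

def burn(n):
    # Pure-Python countdown: runs entirely under the GIL, so two
    # such threads interleave rather than execute in parallel.
    while n:
        n -= 1

N = 2_000_000

t0 = time.perf_counter()
burn(N)
burn(N)
sequential = time.perf_counter() - t0

t0 = time.perf_counter()
threads = [threading.Thread(target=burn, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - t0

print(f"sequential: {sequential:.2f}s  threaded: {threaded:.2f}s")
```

On CPython the threaded run is typically no faster than the sequential one (often slower, due to GIL hand-offs); genuine parallelism requires the GIL to be released, e.g. inside C extension code or by using multiprocessing.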

> So, suppose a couple of CPU-intensive callables get queued which work for a
> substantial time, and meanwhile a bunch of tiny tiny cheap requests arrive.
> Their timely response will be impacted by this issue.

By how much exactly? What operating system?

Regards,
Martin
