Python threading (was: Re: global interpreter lock not working as it should)

Jonathan Hogg jonathan at onegoodidea.com
Sun Aug 4 11:30:44 CEST 2002


On 3/8/2002 21:00, in article
ddc19db7.0208031200.5823a2e at posting.google.com, "Armin Steinhoff"
<a-steinhoff at web.de> wrote:

> Jonathan Hogg <jonathan at onegoodidea.com> wrote in message
> news:<B97159DB.F0A9%jonathan at onegoodidea.com>...
[...]
> 
>> The interpreter in Python is
>> effectively a critical region of code. Sections of the interpreter that
>> might block are placed outside of the region so that other threads can enter
>> it while that thread is blocked. Similarly, every thread is forced
>> periodically to leave the region and re-enter it in order to allow the
>> thread scheduler to re-schedule as necessary (the piece you posted).
> 
> IMHO ... there is nothing like a 'thread-scheduler' in the python
> code.
> All python threads are scheduled by the OS ... that means there is no
> code in the python program which can _force_ periodically a thread to
> leave this critical section ( e.g. in the middle of the execution of
> the 10 byte codes).

Of course there isn't a thread scheduler in Python. I never suggested any
such thing (in fact I went to some effort to point out there isn't). I'm
also not sure that you understand critical regions. A critical region is a
section of code protected by a mutex lock. The bytecode interpreter in
Python is a section of code protected by the GIL. Periodically, each thread
is forced to release the GIL and re-obtain it. Hence the thread is forced to
leave the critical region and re-enter it.
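The forced release/re-acquire can be observed from pure Python. Here is a minimal sketch (it uses the modern CPython name sys.setswitchinterval, measured in seconds; the interpreter of this era exposed the equivalent knob as sys.setcheckinterval, counted in bytecodes):

```python
import sys
import threading

# Make the interpreter drop and re-acquire the GIL more often, so the
# OS thread scheduler gets more chances to switch between threads.
sys.setswitchinterval(0.0005)

counts = {"a": 0, "b": 0}

def spin(name, n):
    # A pure-Python CPU-bound loop: the interpreter periodically
    # releases the GIL mid-loop, letting the OS pick another thread.
    for _ in range(n):
        counts[name] += 1

ta = threading.Thread(target=spin, args=("a", 200_000))
tb = threading.Thread(target=spin, args=("b", 200_000))
ta.start(); tb.start()
ta.join(); tb.join()
print(counts)  # both CPU-bound threads run to completion
```

Neither thread ever blocks on I/O, yet both make progress, because each is periodically forced out of the critical region.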

>> With FIFO or RR scheduling, and assuming no blocking I/O, all threads will
>> be available to run when the GIL is released. If the thread previously
>> running before the GIL was released still has timeslice left it will be
>> allowed to continue. If it has run out of timeslice then the next thread in
>> line will be switched to. It will acquire the GIL and begin executing.
> 
> This happens only if the thread exhausts its timeslice before it
> acquires the GIL again.

Errr... look at that again:

"If the thread previously running before the GIL was released still has
timeslice left it will be allowed to continue. If it has run out of
timeslice then the next thread in line will be switched to."

Did you read my post?

> In all other cases the thread will be suspended after exhausting its
> timeslice (RR sched) and will own the GIL further ... other tasks will
> be started and immediately suspended when they try to get the
> GIL.
> Bear in mind that _ALL_ threads are running at the same
> (fixed) priority level.
> 
>> There is nothing wrong with the code in ceval.c in this regard.
> 
> Releasing the GIL can't create a context switch if all threads running
> at the same priority ... that's the problem.

I'm sorry, but that's just plain rubbish. If all the threads have the same
priority then the thread scheduler will schedule them according to
timeslice. I'm really not sure where you get the idea that a thread can only
be pre-empted by a higher priority thread. That's just not true.

>> Priority inversion is actually extremely unlikely in Python because the main
>> shared resource is the GIL. There is only one of them so all threads that
>> are not waiting on I/O require it. Therefore a medium priority thread will
>> be unable to pre-empt the lower priority thread (or more accurately it will
>> pre-empt the lower priority thread, immediately attempt to obtain the GIL,
>> and block allowing the lower-priority thread which holds the GIL to
>> continue) until the GIL is released, at which point the highest priority
>> thread will be scheduled.
> 
> Priority inversion happens if a lower priority thread is blocking a
> higher priority thread ... IMHO.

The priority inversion you're thinking of is "bounded" priority inversion.
The high priority thread is blocked waiting on the GIL, but it is known that
the GIL will be released at some point in the near future. If you're
attempting soft realtime in Python you have to accept this. There is no
alternative. The only thing you can do is to keep the check interval
(sys.setcheckinterval) low and ensure your code does not execute unbounded
bytecodes (e.g., freeing very large data structures, or attempting a huge
long-integer calculation).
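To make the "unbounded bytecode" point concrete, here is a small sketch: a single long-integer multiplication is one bytecode, so the GIL is held for its entire duration and no other Python thread can be scheduled in the meantime (the sizes here are purely illustrative):

```python
import threading
import time

# One very large integer: squaring it below is a single bytecode, and
# the interpreter holds the GIL for the whole multiplication.
big = 10 ** 200_000

stamps = []

def ticker():
    # A "realtime-ish" thread that wants to wake up regularly; while
    # the big multiplication runs, it cannot acquire the GIL.
    for _ in range(5):
        stamps.append(time.monotonic())
        time.sleep(0.001)

t = threading.Thread(target=ticker)
t.start()
product = big * big  # unbounded work inside one bytecode
t.join()
```

However low you set the check interval, it cannot help here: the check only happens between bytecodes, never in the middle of one.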

The classic, and most problematic, kind of priority inversion is "unbounded"
inversion caused by a medium priority thread pre-empting a lower priority
thread that is holding the GIL. This is only likely to happen if you have a
C extension running some kind of long-running calculation without requiring
the GIL. If you're programming in standard Python then this won't occur.

>> But that's pretty much what everyone was trying to say all along. Python's
>> threads are just like any other thread on the system and rely on the native
>> thread scheduler to do what it thinks is best. Because of this, there is
>> largely nothing that Python can (or indeed should) do to affect this
>> scheduling.
> 
> Yes ... and that's the reason why different scheduling strategies have
> influence on the scheduling of 'python threads'.
> 
> Ok, I will do some tests with the System Analyse Tool (SAT) of QNX6 PE
> ... just to see what happens in detail when I use the different
> scheduling strategies.

Yes, please do. I've already posted examples showing four different
operating systems successfully pre-empting and scheduling CPU-bound Python
threads - in most cases thousands of times a second.
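A rough sketch of that kind of test (illustrative, not the exact code from my earlier posts) - count how often the runner changes between two CPU-bound threads:

```python
import threading

switches = 0   # number of observed thread switches
last = None    # name of the thread that ran most recently
lock = threading.Lock()  # protects the two counters above

def worker(name, iters):
    # CPU-bound loop that notes whenever the "last runner" changed,
    # which can only happen when the scheduler switched threads.
    global switches, last
    for _ in range(iters):
        with lock:
            if last != name:
                switches += 1
                last = name

threads = [threading.Thread(target=worker, args=(n, 50_000))
           for n in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("observed thread switches:", switches)
```

On any of the systems I tried, a test along these lines reports far more than the bare minimum of two switches.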

I'm still waiting to see anyone demonstrate a real problem with Python
threading.

Jonathan



