Global Locking

Daniel Parks Dan.Parks at
Mon Aug 5 11:44:27 EDT 2002

On Mon, 2002-08-05 at 18:52, Michael Chermside wrote:
>  > [Dan expresses interest in changing Python to avoid needing
>  >  a global interpreter lock.]
> I don't understand (or, more accurately, I am not sure YOU understand) 
> exactly what you hope to gain from this modification.

Using Linux 2.4.19, low latency, O(1) scheduler.....

I'll do my best to explain what is going on....and what I want.  We have
modified the Linux scheduler to switch SCHED_RR threads *VERY* often. 
We have implemented the switch in two competing ways: 
1. during the timer tick 
2. during every call to schedule()
(We are in the process of determining which one fits our system best).  

We have tested these modifications using a set of C programs.  These C
programs behave exactly as we want.  The next step was to test if Python
behaved the same way.  In C I was doing this:

void *thread_task(void *arg)
{
	set_rr(); /* set myself real-time round-robin */
	/* ... does some work and prints timestamps ... */
	return NULL;
}

int main(void)
{
	pthread_t th1, th2;

	set_fifo(); /* set myself real-time FIFO with highest priority to
	               guarantee that I get to create all the threads first */

	pthread_create(&th1, NULL, thread_task, NULL);
	pthread_create(&th2, NULL, thread_task, NULL);
	pthread_join(th1, NULL);
	pthread_join(th2, NULL);
	return 0;
}
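(The set_rr()/set_fifo() helpers aren't shown above; a minimal sketch of what they might look like, built on the standard POSIX sched_setscheduler() call -- names and details are my guesses, and real-time scheduling needs root:)

```c
/* Hypothetical implementations of the set_rr()/set_fifo() helpers,
 * using the POSIX scheduling API.  These are a sketch, not the
 * original code. */
#include <sched.h>

/* Put the calling thread/process under SCHED_RR (round-robin).
 * Returns 0 on success, -1 on failure (EPERM without root). */
int set_rr(void)
{
	struct sched_param sp;
	sp.sched_priority = sched_get_priority_min(SCHED_RR);
	return sched_setscheduler(0, SCHED_RR, &sp);
}

/* Put the calling thread/process under SCHED_FIFO at the highest
 * priority, so it runs until it blocks or yields -- letting main()
 * create all the threads before any of them start working. */
int set_fifo(void)
{
	struct sched_param sp;
	sp.sched_priority = sched_get_priority_max(SCHED_FIFO);
	return sched_setscheduler(0, SCHED_FIFO, &sp);
}
```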

So when testing python I did this:

def pythread_task():
	set_rr()  # set myself real-time round-robin
	for i in range(BIG):
		python_workblock()

def myprog():
	set_fifo()  # real-time FIFO, highest priority, so I create all threads first
	thread.start_new_thread(pythread_task, ())
	thread.start_new_thread(pythread_task, ())

with python_workblock defined as:

void python_workblock()
{
	/* does some work and prints timestamps */
}
I ran the C code through SWIG and ran the tests using this.  The results
were different.  Although for the most part the threads would switch
correctly, there were large periods of time where these threads would
just not switch among themselves well, even though the code between the
two implementations was largely the same.  I next wrote my own wrapper
code just to make sure that SWIG wasn't doing anything that I didn't
want done.  This did not alleviate the problem.  At this stage we
started to run kernel tests to determine what was going on.  It turns
out that sticking the currently running thread on the back of the run
queue every so often does *NOT* make the Python threads fair, since most
of the Python threads are not on the run queue.

Now while the C threads use the exact same threading library (the
LinuxThreads implementation of pthreads) that Python uses, the
interaction between Python and our scheduler seems to be very
different.  Because Python threads are for the most part not on the run
queue (due to sleeping on a condition variable), it is not easy to
rotate them around with a set of C threads such that they all rotate
fairly.  This is inherent in the way Python does its work, as only one
thread is ever really on the run queue at a time.  If we were to get rid
of the big lock and implement a bunch of little locks, we could have the
threads sleeping only when they really NEED to be sleeping.  Especially
since we will use this program MAINLY on dual/quad-CPU machines, this
could really become an issue.  I know that the common response to this
would be "do it in C then", but we are using Python to interface with
developers working on a higher-level API (of which there will be a very
small subset of Python available to them)....who are expected to know as
little about programming as possible.
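(To make the "sleeping on a condition variable" point concrete, here is a simplified sketch -- not CPython's actual source -- of the kind of lock the interpreter uses: a flag guarded by a mutex and a condition variable.  A thread that can't take the lock blocks in pthread_cond_wait(), and a thread blocked there is off the kernel run queue entirely, which is why run-queue rotation never sees it:)

```c
/* Simplified sketch of a "big interpreter lock" in the style of a
 * pthread condition-variable lock.  Illustrative only. */
#include <pthread.h>

typedef struct {
	char            locked;  /* 0 = free, 1 = held */
	pthread_mutex_t mut;
	pthread_cond_t  cond;
} big_lock;

void big_lock_init(big_lock *bl)
{
	bl->locked = 0;
	pthread_mutex_init(&bl->mut, NULL);
	pthread_cond_init(&bl->cond, NULL);
}

void big_lock_acquire(big_lock *bl)
{
	pthread_mutex_lock(&bl->mut);
	while (bl->locked)
		pthread_cond_wait(&bl->cond, &bl->mut); /* sleeps OFF the run queue */
	bl->locked = 1;
	pthread_mutex_unlock(&bl->mut);
}

void big_lock_release(big_lock *bl)
{
	pthread_mutex_lock(&bl->mut);
	bl->locked = 0;
	pthread_mutex_unlock(&bl->mut);
	pthread_cond_signal(&bl->cond); /* wake one waiter */
}
```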

Now I know a lot of people may simply disagree with what we
WANT.....that's ok....I'm not asking you to agree with what the goal
is....instead, I simply want feedback on how to achieve the goal.....or
how what I just explained is inherently flawed.

Bottom line of what I just said.....I want all the Python threads on the
run queue unless they are legitimately sleeping (i.e., doing I/O).  The
only way I know how to do that is to get rid of the big lock and set up
a bunch of smaller locks.  If there is another way to get them all on
the run queue....or if you think you know of a good way to implement
finer locking....please respond.....
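(By "a bunch of smaller locks" I mean something like the following sketch -- a per-object mutex instead of one interpreter-wide lock, so a thread sleeps only when it contends for the *same* object.  This is illustrative only; actually doing this inside CPython would also mean protecting reference counts and the interpreter's internal state:)

```c
/* Sketch of fine-grained locking: each shared structure carries its
 * own mutex, so threads working on different objects never block
 * each other.  Names here are hypothetical. */
#include <pthread.h>

typedef struct {
	long            value;
	pthread_mutex_t lock;  /* per-object lock, not a global one */
} shared_counter;

void counter_init(shared_counter *c)
{
	c->value = 0;
	pthread_mutex_init(&c->lock, NULL);
}

void counter_add(shared_counter *c, long n)
{
	pthread_mutex_lock(&c->lock); /* blocks only if THIS object is busy */
	c->value += n;
	pthread_mutex_unlock(&c->lock);
}
```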



More information about the Python-list mailing list