Parallelization on multi-CPU hardware?

Andreas Kostyrka andreas at kostyrka.org
Tue Oct 5 23:36:16 CEST 2004


On Tue, Oct 05, 2004 at 12:48:14PM -0400, Aahz wrote:
> In article <fmy8d.32815$Z14.12044 at news.indigo.ie>,
> Alan Kennedy  <alanmk at hotmail.com> wrote:
> >
> >I agree that it could potentially be a serious hindrance for cpython if 
> >"multiple core" CPUs become commonplace. This is in contrast to jython 
> >and ironpython, both of which support multiple-cpu parallelism.
> >
> >Although I completely accept the usual arguments offered in defense of 
> >the GIL, i.e. that it isn't a problem in the great majority of use 
> >cases, I think that position will become more difficult to defend as 
> >desktop CPUs sprout more and more execution pipelines.
> 
> Perhaps.  Then again, those pipelines will probably have their work cut
> out running firewalls, spam filters, voice recognition, and so on.  I
> doubt the GIL will make much difference, still.
Actually, I doubt that losing the GIL would make that much of a performance
difference even on a real (not hyper-threaded) 4-way box. What the GIL buys
us is no lock contention and no locking overhead within Python.

And before somebody cries out, just consider how the following trivial
statement would have to be executed under fine-grained locking:

a = b + c
Lock a
Lock b
Lock c
a = b + c
Unlock c
Unlock b
Unlock a
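The lock sequence above can be sketched in Python itself. This is a toy
illustration, not CPython internals: `Box` and `locked_add` are hypothetical
names, and the boxes stand in for Python objects that would each carry their
own lock. Note that even this toy version has to acquire the locks in a
consistent order (here, by `id()`) to avoid the deadlocks mentioned above.

```python
import threading

class Box:
    """A mutable cell standing in for a Python object with its own lock."""
    def __init__(self, value=0):
        self.value = value
        self.lock = threading.Lock()

def locked_add(a, b, c):
    """Perform a.value = b.value + c.value with per-object locking.

    Locks are taken in a fixed global order (by id) so that two threads
    running locked_add on overlapping objects cannot deadlock.
    """
    boxes = sorted({a, b, c}, key=id)  # deduplicate, then order consistently
    for box in boxes:
        box.lock.acquire()
    try:
        a.value = b.value + c.value
    finally:
        for box in boxes:
            box.lock.release()
```

Three lock acquisitions, a sort, and three releases, just to execute one
`a = b + c`: that is the per-operation overhead the GIL avoids.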

So basically you either get a really huge number of locks (one per
object), with plenty of potential for contention, deadlocks and all the
other problems that would really slow down ceval (the bytecode
evaluation loop).

Or one could use coarser granularity and lock, say, the class of the
object involved, but that wouldn't help much either.
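A quick sketch of that coarser alternative, with hypothetical names (this is
not CPython API): one lock per class rather than one per object. The catch is
visible immediately: two threads touching *different* instances of the same
class still serialize on the same lock.

```python
import threading
from collections import defaultdict

# One lock per class; created lazily on first use.
class_locks = defaultdict(threading.Lock)

def with_class_lock(obj, fn):
    """Run fn(obj) while holding the single shared lock for obj's class."""
    with class_locks[type(obj)]:
        return fn(obj)
```

Since nearly everything in a Python program is an instance of a handful of
built-in types (int, str, list, dict), those few class locks would be
contended almost as heavily as one global lock.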

So basically the GIL is a design decision that makes sense. Perhaps it
shouldn't just be called the GIL; call it the "very large locking
granularity design decision".

And before somebody points out that other languages can use locks too:
other languages usually have a much lower-level execution model than
Python, and they usually force the developer to deal with
synchronization primitives explicitly. Python, OTOH, has always had a
"no segmentation fault" policy, so the locking would have to be "safe"
as in "not producing segfaults". That's not trivial to implement; for
example, reference counting isn't trivially implementable without
locking (at least not portably).
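To see why refcounting needs synchronization, here is a toy Python stand-in
for the C-level ob_refcnt field (the class and method names are made up for
illustration). The increment is a read-modify-write: without the lock, two
threads could both read the same old count, and one update would be lost,
leading to a premature free or a leak.

```python
import threading

class RefCounted:
    """Toy model of an object with a thread-safe reference count."""
    def __init__(self):
        self.refcnt = 1
        self._lock = threading.Lock()

    def incref(self):
        with self._lock:          # without this, refcnt += 1 can lose updates
            self.refcnt += 1

    def decref(self):
        with self._lock:
            self.refcnt -= 1
            # True means the object would be deallocated at this point.
            return self.refcnt == 0
```

Taking and releasing a lock on every incref/decref, for every object touched
by every bytecode, is exactly the overhead being argued against here. (Real
lock-free alternatives exist via atomic instructions, but those were not
portably available across C compilers at the time.)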

So, IMHO, there are basically the following design decisions:
GIL:  large granularity
MSL:  (many small locks) would slow down the overall execution of Python
      programs.
MSLu: (many small locks, unsafe) unacceptable because it would change the
      Python experience ;)

Andreas
> -- 
> Aahz (aahz at pythoncraft.com)           <*>         http://www.pythoncraft.com/
> 
> WiFi is the SCSI of the 21st Century -- there are fundamental technical
> reasons for sacrificing a goat.  (with no apologies to John Woods)
> -- 
> http://mail.python.org/mailman/listinfo/python-list
