Adding a Par construct to Python?
paul at boddie.org.uk
Sun May 17 23:27:55 CEST 2009
On 17 Mai, 14:05, jer... at martinfamily.freeserve.co.uk wrote:
> From a user point of view I think that adding a 'par' construct to
> Python for parallel loops would add a lot of power and simplicity,
> par i in list:
You can do this right now with a small amount of work to make
updatePartition a callable which works in parallel, and without the
need for extra syntax. For example, with the pprocess module, you'd
use boilerplate like this:
    import pprocess

    queue = pprocess.Queue(limit=ncores)
    updatePartition = queue.manage(pprocess.MakeParallel(updatePartition))
(See http://www.boddie.org.uk/python/pprocess/tutorial.html#Map for
details.)
At this point, you could use a normal "for" loop, and you could then
"sync" for results by reading from the queue. I'm sure it's a similar
story with the multiprocessing/processing module.
> There would be no locking and it would be the programmer's
> responsibility to ensure that the loop was truly parallel and correct.
Yes, that's the idea.
> The intention of this would be to speed up Python execution on multi-
> core platforms. Within a few years we will see 100+ core processors as
> standard and we need to be ready for that.
In what sense are we not ready? Perhaps the abstractions could be
better, but it's definitely possible to run Python code on multiple
cores today and get decent core utilisation.
> There could also be parallel versions of map, filter and reduce
Yes, that's what pprocess.pmap is for, and I imagine that other
solutions offer similar facilities.
> BUT...none of this would be possible with the current implementation
> of Python with its Global Interpreter Lock, which effectively rules
> out true parallel processing.
> What do others think?
That your last statement is false: true parallel processing is
possible today. See the Wiki for a list of solutions.
In addition, Jython and IronPython don't have a global interpreter
lock, so you have the option of using threads with those
implementations.