Dan Stromberg drsalists at
Thu Apr 7 01:20:06 EDT 2011

On Wed, Apr 6, 2011 at 9:06 PM, elsa <kerensaelise at> wrote:

> Hi guys,
> I want to try out some pooling of processors, but I'm not sure if it
> is possible to do what I want to do. Basically, I want to have a
> global object, that is updated during the execution of a function, and
> I want to be able to run this function several times on parallel
> processors. The order in which the function runs doesn't matter, and
> the value of the object doesn't matter to the function, but I do want
> the processors to take turns 'nicely' when updating the object, so
> there are no collisions. Here is an extremely simplified and trivial
> example of what I have in mind:
> from multiprocessing import Pool
> import random
> p=Pool(4)
> myDict={}
> def update(value):
>    global myDict
>    index=random.random()
>    myDict[index]+=value
> total=1000
> After, I would also like to be able to use several processors to
> access the global object (but not modify it). Again, order doesn't
> matter:
> p1=Pool(4)
> def getValues(index):
>    global myDict
>    print myDict[index]
> Is there a way to do this?

This should give you a synchronized wrapper around an object in shared
memory:

More information about the Python-list mailing list