On Tue, May 25, 2010 at 10:16 PM, Robin <robince@gmail.com> wrote:
If the updates are independent and don't have to be done sequentially you can use the multiprocessing.Pool interface which I've found very convenient for this sort of thing.
Ideally, if particles[i] is a class instance, then random_fork could modify the instance in place instead of returning a modified copy... then you could do something like
    def update_particle(self, i):
        nv = numpy.random.standard_normal((N,))
        self.particles[i].random_fork(nv)

    p = multiprocessing.Pool(8)
    p.map(self.update_particle, range(len(self.particles)))
Sorry - just realised it probably doesn't make sense to use map in this case, since your processing function isn't returning anything... you could look at Pool.apply_async (which returns control immediately and lets the work continue in the background) and Pool.apply (the blocking variant, which is probably what you want). Cheers Robin
This will distribute each update_particle call to a different process, using all your cores (provided the updates really are independent of each other).
I'm not sure whether numpy.random is safe to use across processes like this - forked workers can inherit the same RNG state and then all draw identical numbers - so that would need checking, but I hope this helps a bit...