I am using a particle filter to estimate the trajectory of a camera based on a sequence of images taken by the camera. The code is slow, but I have 8 processors in my desktop machine. I'd like to use them to get results 8 times faster. I've been looking at the following sections of http://docs.python.org/library: "16.6. multiprocessing" and "16.2. threading". I've also read some discussion from 2006 on scipy-user@scipy.org about seeds for random numbers in threads. I don't have any experience with multiprocessing and would appreciate advice.

Here is a bit of code that I want to modify:

    for i in xrange(len(self.particles)):
        self.particles[i] = self.particles[i].random_fork()

Each particle is a class instance that represents a possible camera state (position, orientation, and velocities). particle.random_fork() is a method that moves the position and orientation based on the current velocities and then uses numpy.random.standard_normal((N,)) to perturb the velocities. I handle the correlation structure of the noise with matrices that are members of particle, and I do some of the calculations in C++.

I would like to do something like:

    for i in xrange(len(self.particles)):
        nv = numpy.random.standard_normal((N,))
        launch_on_any_available_processor(
            self.particles[i] = self.particles[i].random_fork(nv))
    wait_for_completions()

But I don't see a command like "launch_on_any_available_processor". I would be grateful for any advice.

--
Andy Fraser ISR-2 (MS:B244)
afraser@lanl.gov      Los Alamos National Laboratory
505 665 9448          Los Alamos, NM 87545
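
P.S. To make the question concrete, here is a rough, untested sketch of what I imagine a multiprocessing.Pool version might look like. The names fork_all and _fork_one are placeholders I made up, and the sketch assumes the particle instances (including their noise-correlation matrices) can be pickled and that random_fork has been changed to accept the noise vector as an argument, as in my pseudocode above:

    import multiprocessing

    import numpy


    def _fork_one(args):
        # Worker: apply random_fork to one particle using a noise vector
        # drawn in the parent.  Defined at module level so multiprocessing
        # can pickle it and send it to the worker processes.
        particle, nv = args
        return particle.random_fork(nv)


    def fork_all(particles, N, processes=8):
        # Draw every noise vector in the parent process, as in my
        # pseudocode, so the workers do not all repeat the same random
        # stream (the seed issue from the 2006 scipy-user thread).
        noise = [numpy.random.standard_normal((N,)) for _ in particles]
        pool = multiprocessing.Pool(processes=processes)
        try:
            return pool.map(_fork_one, zip(particles, noise))
        finally:
            pool.close()
            pool.join()

and then in my filter I would call

    self.particles = fork_all(self.particles, N)

Is something along these lines the right way to use the module, or will the cost of pickling the particles back and forth eat the speedup?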