[Numpy-discussion] Simple shared arrays

Erik Rigtorp erik at rigtorp.com
Fri Dec 31 09:02:14 EST 2010

On Fri, Dec 31, 2010 at 02:13, Paul Ivanov <pivanov314 at gmail.com> wrote:
> Erik Rigtorp, on 2010-12-30 21:30,  wrote:
>> Hi,
>> I was trying to parallelize some algorithms and needed a writable
>> array shared between processes. It turned out to be quite simple and
>> gave a nice speedup, almost linear in the number of cores. Of course
>> you need to know what you are doing to avoid segfaults and such, but
>> I still think something like this should be included with NumPy for
>> power users.
>> This works by inheriting anonymous mmapped memory. Not sure if this
>> works on Windows.
> --snip--
> I've successfully used (what I think is) Sturla Molden's
> shmem_as_ndarray as outlined here [1] and here [2] for these
> purposes.

Yeah, I saw that code too. My implementation is even more lax, but
easier to use: it passes arrays to subprocesses by memory reference.
Dangerous: yes; effective: very.
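A minimal sketch of the idea, assuming a POSIX fork() start method:
allocate anonymous shared memory (here via multiprocessing.RawArray,
which mmaps anonymously on POSIX), wrap it in a zero-copy NumPy view,
and let forked children inherit the mapping. The helper names
shared_zeros and worker are hypothetical, not part of NumPy:

```python
import multiprocessing
import numpy as np


def shared_zeros(shape, dtype=np.float64):
    """Allocate a writable NumPy array backed by anonymous shared memory.

    Children created by fork() inherit the mapping, so they see (and
    can mutate) the same underlying buffer as the parent. Hypothetical
    helper for illustration only.
    """
    n = int(np.prod(shape))
    # RawArray allocates unsynchronized anonymous shared memory.
    buf = multiprocessing.RawArray('b', n * np.dtype(dtype).itemsize)
    # Zero-copy view: no data is duplicated, just reinterpreted.
    return np.frombuffer(buf, dtype=dtype).reshape(shape)


def worker(arr, i):
    # Writes land in the shared buffer, visible to the parent.
    arr[i] = i


if __name__ == '__main__':
    a = shared_zeros((4,))
    procs = [multiprocessing.Process(target=worker, args=(a, i))
             for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(a)  # each worker wrote its own slot
```

As the thread warns, there is no locking here: concurrent writes to
the same slot are a race, so this is strictly a power-user pattern.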

It would be nice if we could work out some good, effective patterns
using multiprocessing and include them with NumPy. The best solution
is probably a parallel_for function:

    def parallel_for(func, inherit_args, iterable): ...

where func has the signature

    def func(inherit_args, item): ...

and parallel_for makes sure the inherit_args are viewable as a class
shared() with writable shared memory in every worker.
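One way the proposed parallel_for could look, sketched under the same
fork()-inheritance assumption (the worker-splitting strategy and the
square_into example function are my own illustration, not code from
the thread):

```python
import multiprocessing
import numpy as np


def parallel_for(func, inherit_args, iterable, nprocs=None):
    """Call func(inherit_args, item) for every item, spread over workers.

    Hypothetical sketch of the proposed API. It relies on fork()
    semantics: inherit_args reach the workers by inheritance, not by
    pickling, so shared-memory arrays stay writable in every worker.
    """
    items = list(iterable)
    nprocs = nprocs or multiprocessing.cpu_count()

    def run(chunk):
        for item in chunk:
            func(inherit_args, item)

    # Round-robin split keeps chunk sizes balanced.
    chunks = [items[i::nprocs] for i in range(nprocs)]
    procs = [multiprocessing.Process(target=run, args=(chunk,))
             for chunk in chunks if chunk]
    for p in procs:
        p.start()
    for p in procs:
        p.join()


def square_into(args, i):
    # args[0] is assumed to be a shared-memory array.
    args[0][i] = i * i


if __name__ == '__main__':
    buf = multiprocessing.RawArray('d', 8)    # anonymous shared memory
    a = np.frombuffer(buf, dtype=np.float64)  # zero-copy NumPy view
    parallel_for(square_into, (a,), range(8), nprocs=2)
    print(a.tolist())  # [0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 36.0, 49.0]
```

Since each worker touches disjoint indices, no locking is needed in
this particular usage; overlapping writes would need synchronization.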
