Multiprocessing, shared memory vs. pickled copies
sturlamolden
sturlamolden at yahoo.no
Thu Apr 7 19:39:27 EDT 2011
On 5 Apr, 02:05, Robert Kern <robert.k... at gmail.com> wrote:
> PicklingError: Can't pickle <class
> 'multiprocessing.sharedctypes.c_double_Array_10'>: attribute lookup
> multiprocessing.sharedctypes.c_double_Array_10 failed
Hehe :D
That is why programmers should not mess with code they don't
understand!
Gaël and I wrote shmem to avoid multiprocessing.sharedctypes, because
they cannot be pickled (they are shared by handle inheritance)! To do
this we used the raw Windows API and Unix System V IPC instead of
multiprocessing.Array, and the buffer is pickled by giving it a name
in the file system. Please be informed that the code on Bitbucket has
been "fixed" by someone who doesn't understand my code. "If it ain't
broke, don't fix it."
http://folk.uio.no/sturlamo/python/sharedmem-feb13-2009.zip
Known issues/bugs: 64-bit support is lacking, and os._exit in
multiprocessing causes a memory leak on Linux.
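(For illustration only, a minimal sketch of the same name-based idea using the multiprocessing.shared_memory module that was added much later, in Python 3.8. The actual shmem code uses raw Windows/System V calls, not this module, but the principle is identical: only the segment's name ever needs to be pickled, never the buffer.)

    import numpy as np
    from multiprocessing import shared_memory

    # Parent: create a named segment and expose it as a NumPy array.
    shm = shared_memory.SharedMemory(create=True, size=10 * 8)
    a = np.ndarray((10,), dtype=np.float64, buffer=shm.buf)
    a[:] = 0.0

    # Any process that knows the name can attach to the same memory;
    # pickling the array only has to transmit shm.name plus shape/dtype.
    other = shared_memory.SharedMemory(name=shm.name)
    b = np.ndarray((10,), dtype=np.float64, buffer=other.buf)
    b[0] = 42.0
    assert a[0] == 42.0

    other.close()
    shm.close()
    shm.unlink()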
> Maybe. If the __reduce_ex__() method is implemented properly (and
> multiprocessing bugs aren't getting in the way), you ought to be able to pass
> them to a Pool just fine. You just need to make sure that the shared arrays are
> allocated before the Pool is started. And this only works on UNIX machines. The
> shared memory objects that shmarray uses can only be inherited. I believe that's
> what Sturla was getting at.
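(For reference, the allocate-before-the-Pool pattern described above looks roughly like this sketch; it relies on fork inheritance, so UNIX only:)

    import multiprocessing as mp

    def init(arr):
        # Each forked worker stashes the inherited array in a module-level global.
        global _arr
        _arr = arr

    def work(i):
        with _arr.get_lock():
            _arr[i] = i * i
        return i

    if __name__ == '__main__':
        # Allocate the shared array *before* the Pool is created so the
        # forked workers inherit the underlying shared memory.
        shared = mp.Array('d', 10)
        pool = mp.Pool(initializer=init, initargs=(shared,))
        pool.map(work, range(10))
        pool.close()
        pool.join()
        print(shared[:])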
It's a C extension that gives a buffer to NumPy. Then YOU changed how
NumPy pickles arrays referencing these buffers, using copy_reg :)
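(Roughly, such a copy_reg hook looks like the sketch below. This is not the actual sharedmem code; attach_shared_segment and shared_segment_name are hypothetical helpers standing in for the real C extension.)

    try:
        import copy_reg                 # Python 2 name, as in this post
    except ImportError:
        import copyreg as copy_reg      # Python 3 spelling
    import numpy

    def _rebuild_shared(name, shape, dtype):
        # Hypothetical helper: reattach to the named segment and wrap it.
        buf = attach_shared_segment(name)
        return numpy.frombuffer(buf, dtype=dtype).reshape(shape)

    def _reduce_ndarray(a):
        # Hypothetical helper: returns the segment name if the array's
        # buffer lives in named shared memory, else None.
        name = shared_segment_name(a)
        if name is not None:
            # Pickle only the name plus metadata, never the data itself.
            return _rebuild_shared, (name, a.shape, a.dtype)
        return a.__reduce__()           # ordinary arrays pickle as usual

    copy_reg.pickle(numpy.ndarray, _reduce_ndarray)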
Sturla