Multiprocessing.Array bug / shared numpy array

Felix schlesin at
Thu Oct 8 22:14:15 CEST 2009


The documentation for multiprocessing.Array says:

multiprocessing.Array(typecode_or_type, size_or_initializer, *, lock=True)

If lock is False then access to the returned object will not be
automatically protected by a lock, so it will not necessarily be
"process-safe".

But trying it gives:

In [48]: mp.Array('i', 1, lock=False)
AssertionError                            Traceback (most recent call last)

multiprocessing/__init__.pyc in Array(typecode_or_type,
size_or_initializer, **kwds)
    252     '''
    253     from multiprocessing.sharedctypes import Array
--> 254     return Array(typecode_or_type, size_or_initializer, **kwds)

multiprocessing/sharedctypes.pyc in Array(typecode_or_type,
size_or_initializer, **kwds)
     85     if lock is None:
     86         lock = RLock()
---> 87     assert hasattr(lock, 'acquire')
     88     return synchronized(obj, lock)


That is, it looks like lock=False is not actually supported: passing
False trips the "assert hasattr(lock, 'acquire')" check, since False
has no acquire method. Or am I reading this wrong? If not, I can
submit a bug report.
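In the meantime, a possible workaround (my reading of the code, not a
documented guarantee) is to call multiprocessing.sharedctypes.RawArray
directly, which skips the synchronization wrapper entirely and so never
reaches the failing assert:

```python
import multiprocessing.sharedctypes as sct

# RawArray returns a plain ctypes array with no lock wrapper,
# which is effectively what Array(..., lock=False) should give.
raw = sct.RawArray('i', 10)
raw[0] = 42

print(raw[0])    # 42
print(len(raw))  # 10
```

Unlike the synchronized wrapper, the object returned here has no
acquire/release methods at all, so concurrent writers would need their
own lock.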

I am trying to create a shared, read-only numpy.ndarray between
several processes. After some googling the basic idea is:

import multiprocessing as mp
import numpy

sarr = mp.Array('i', 1000)
# get_obj() is the public accessor for the underlying ctypes array;
# numpy.intc matches the 'i' typecode exactly.
ndarr = numpy.frombuffer(sarr.get_obj(), dtype=numpy.intc)

Since it will be read only (after being filled once in a single
process) I don't think I need any locking mechanism. However is this
really true given garbage collection, reference counts and other
implicit things going on?
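For what it's worth, here is a minimal end-to-end sketch of that idea:
fill the shared buffer once in the parent, then read it from a child
process through a no-copy numpy view. It assumes a fork-based start
method (the default on Linux); the names reader and total are mine:

```python
import multiprocessing as mp
import numpy as np

def reader(shared, out_q):
    # Wrap the shared ctypes buffer in a numpy array (no copy is made).
    arr = np.frombuffer(shared.get_obj(), dtype=np.intc)
    out_q.put(int(arr.sum()))

shared = mp.Array('i', 1000)                    # default synchronized wrapper
arr = np.frombuffer(shared.get_obj(), dtype=np.intc)
arr[:] = 1                                      # fill once in the parent

q = mp.Queue()
p = mp.Process(target=reader, args=(shared, q))
p.start()
total = q.get()
p.join()
print(total)  # 1000
```

Since the child only reads after the parent has finished writing, no
locking is exercised here; the reference counts and garbage collection
on the Python side only affect each process's own wrapper objects, not
the shared memory block itself.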

Or is there a recommended better way to do this?

