NumPy arrays that use memory allocated from other libraries or tools
Travis Oliphant
oliphant.travis at ieee.org
Thu Sep 11 09:45:27 EDT 2008
sturlamolden wrote:
> On Sep 10, 6:39 am, Travis Oliphant <oliphant.tra... at ieee.org> wrote:
>
>> I wanted to point anybody interested to a blog post that describes a
>> useful pattern for having a NumPy array that points to the memory
>> created by a different memory manager than the standard one used by
>> NumPy.
>
>
> Here is something similar I have found useful:
>
> There will be a new module in the standard library called
> 'multiprocessing' (cf. the pyprocessing package in cheese shop). It
> allows you to create multiple processes (as opposed to threads) for
> concurrency on SMPs (cf. the dreaded GIL).
>
> The 'multiprocessing' module lets us put ctypes objects in shared
> memory segments (processing.Array and processing.Value). It has its
> own malloc, so there is no 4k (one page) lower limit on object size.
> Here is how we can make a NumPy ndarray view the shared memory
> referenced by these objects:
>
> try:
>     import processing
> except ImportError:
>     import multiprocessing as processing
>
> import numpy, ctypes
>
> _ctypes_to_numpy = {
>     ctypes.c_char   : numpy.int8,
>     ctypes.c_wchar  : numpy.int16,
>     ctypes.c_byte   : numpy.int8,
>     ctypes.c_ubyte  : numpy.uint8,
>     ctypes.c_short  : numpy.int16,
>     ctypes.c_ushort : numpy.uint16,
>     ctypes.c_int    : numpy.int32,
>     ctypes.c_uint   : numpy.uint32,
>     ctypes.c_long   : numpy.int32,
>     ctypes.c_ulong  : numpy.uint32,
>     ctypes.c_float  : numpy.float32,
>     ctypes.c_double : numpy.float64
> }
>
> def shmem_as_ndarray( array_or_value ):
>     """ view processing.Array or processing.Value as ndarray """
>     obj = array_or_value._obj
>     buf = obj._wrapper.getView()
>     try:
>         # processing.Value wraps a single ctypes scalar
>         t = _ctypes_to_numpy[type(obj)]
>         return numpy.frombuffer(buf, dtype=t, count=1)
>     except KeyError:
>         # processing.Array wraps a ctypes array; _type_ is its element type
>         t = _ctypes_to_numpy[obj._type_]
>         return numpy.frombuffer(buf, dtype=t)
>
> With this simple tool we can make processes created by multiprocessing
> work with ndarrays that reference the same shared memory segment. I'm
> doing some scalability testing on this. It looks promising :)
>
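[For an Array, the same zero-copy idea can also be sketched with the public
API alone: multiprocessing.Array returns a synchronized wrapper whose
get_obj() exposes the underlying ctypes array, and numpy.frombuffer can
view that buffer directly. A minimal sketch, without the private _wrapper
internals used above; locking and error handling omitted:]

```python
import multiprocessing
import ctypes
import numpy

# Allocate 10 doubles in a shared memory segment; get_obj() returns the
# underlying ctypes array, which numpy.frombuffer views in place (no copy).
shared = multiprocessing.Array(ctypes.c_double, 10)
arr = numpy.frombuffer(shared.get_obj())   # float64 view, length 10

arr[:] = numpy.arange(10)

# Writes through the ndarray are visible through the ctypes wrapper too.
print(shared[3])  # 3.0
```

[Child processes created by multiprocessing that inherit `shared` can build
their own ndarray view the same way and see the same memory.]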
Hey, that is very neat.
Thanks for pointing me to it. I was not aware of this development in
multiprocessing.
-Travis