On Thu, Dec 15, 2011 at 16:17, Fabrice Silva <silva@lma.cnrs-mrs.fr> wrote:
How can one arbitrarily assume that an ndarray owns its data?
More explicitly, I have a temporary home-made C structure that holds a pointer to an array. I prepare (using Cython) a numpy.ndarray using the PyArray_NewFromDescr function. I can delete my temporary C structure without freeing the memory holding the array, but I wish the numpy.ndarray would become the owner of the data.
How do I do such a thing?
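The situation described can be sketched in pure Python with ctypes as a stand-in for the Cython/PyArray_NewFromDescr route (the `buf` here is a hypothetical externally allocated buffer, not the poster's actual C structure):

```python
import ctypes
import numpy as np

# A buffer allocated outside numpy -- a stand-in for the data held by the
# temporary C structure in the question (the real code wraps a raw pointer
# with PyArray_NewFromDescr from Cython).
buf = (ctypes.c_double * 4)(1.0, 2.0, 3.0, 4.0)

# Wrap it without copying; numpy does not take ownership of the memory.
a = np.ctypeslib.as_array(buf)
print(a.flags['OWNDATA'])  # False: freeing the buffer remains the caller's job
```

Because OWNDATA is False, deallocating the ndarray will not free the underlying buffer, which is exactly the ownership gap the question is about.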
You can't, really. numpy-owned arrays will be deallocated with numpy's deallocator, which may not be the appropriate deallocator for memory that your library allocated. If at all possible, I recommend using numpy to create the ndarray and passing that pointer to your library. Sometimes the library's API gets in the way of this. Otherwise, copy the data.

Devs, looking into this, I noticed that we use PyDataMem_NEW() and PyDataMem_FREE() (which are #defined to malloc() and free()) for handling the data pointer. Why aren't we using the appropriate PyMem_*() functions (or the PyArray_*() memory functions, which default to the PyMem_*() implementations)? Using the PyMem_*() functions lets the Python memory manager keep an accurate idea of how much memory is being used, which can be important for the large amounts of memory that numpy arrays can consume. I assume this is an intentional design decision; I just want to know the rationale for it and would like it documented. I can certainly understand if it causes bad interactions with the garbage collector, say (though hiding information from the GC seems like a suboptimal approach).

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
 -- Umberto Eco
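The pattern recommended above (let numpy allocate and own the buffer, then hand the raw pointer to the C library) can be sketched like this; `lib.fill` is a hypothetical C function, simulated here by writing through the pointer:

```python
import ctypes
import numpy as np

# numpy allocates and owns the memory; it will be freed by numpy's own
# deallocator when the array goes away.
a = np.zeros(4, dtype=np.float64)

# Expose the data pointer to pass into a C library, e.g. a hypothetical
# void fill(double *data, size_t n).
ptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))

# Simulate what lib.fill(ptr, a.size) would do: write through the pointer.
for i in range(a.size):
    ptr[i] = float(i)

print(a)  # the library's writes are visible in the numpy-owned array
```

This sidesteps the ownership question entirely: the library only borrows the pointer, and ordinary numpy lifetime management applies.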