Devs,

Looking into this, I noticed that we use PyDataMem_NEW() and
PyDataMem_FREE() (which are #defined to malloc() and free()) for
handling the data pointer. Why aren't we using the appropriate
PyMem_*() functions (or the PyArray_*() memory functions which default
to using the PyMem_*() implementations)? Using the PyMem_*() functions
lets the Python memory manager have an accurate idea how much memory
is being used, which can be important for the large amounts of memory
that numpy arrays can consume.
I assume this is intentional design. I just want to know the rationale
for it and would like it documented. I can certainly understand if it
causes bad interactions with the garbage collector, say (though hiding
information from the GC seems like a suboptimal approach).
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion