[snip]
Devs, looking into this, I noticed that we use PyDataMem_NEW() and PyDataMem_FREE() (which is #defined to malloc() and free()) for handling the data pointer. Why aren't we using the appropriate PyMem_*() functions (or the PyArray_*() memory functions which default to using the PyMem_*() implementations)? Using the PyMem_*() functions lets the Python memory manager have an accurate idea how much memory is being used, which can be important for the large amounts of memory that numpy arrays can consume.
I assume this is intentional design. I just want to know the rationale for it and would like it documented. I can certainly understand if it causes bad interactions with the garbage collector, say (though hiding information from the GC seems like a suboptimal approach).
The macros were created so that the allocator could be switched once we better understood the benefits and trade-offs of using the Python memory manager versus the system memory manager (or one specialized for NumPy). So, the only intentional design was to use the macros (the decision to make them point to malloc and free was more a matter of keeping what was being done before than an explicit choice).

-Travis
-- Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
---
Travis Oliphant
Enthought, Inc.
oliphant@enthought.com
1-512-536-1057
http://www.enthought.com