[Numpy-discussion] Memory leak in numpy?

Joseph McGlinchy JMcGlinchy at esri.com
Wed Jan 29 14:39:44 EST 2014


Upon further investigation, I do believe the leak is within the scipy code. I commented out my call to processBinaryImage(), which contains nothing but scipy calls, and my memory usage now remains flat, varying by only about 1 MB. Any ideas? For now I am working around it by checking how far I got through my dataset, but I have to restart the program after each memory crash.
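(Not part of the original thread.) The bisection approach above, commenting out one call and watching whether memory stays flat, can be automated with the standard-library tracemalloc module. Below is a minimal sketch; process_chunk is a hypothetical stand-in for the poster's processBinaryImage(), with a deliberate leak built in so the growth shows up in the snapshot diff:

```python
import tracemalloc

def process_chunk(data, cache):
    # hypothetical stand-in for processBinaryImage(); the deliberate
    # bug: each result is appended to a long-lived list, so memory
    # grows on every iteration instead of being freed
    result = [x * 2 for x in data]
    cache.append(result)
    return result

cache = []
tracemalloc.start()
before = tracemalloc.take_snapshot()

for _ in range(100):
    process_chunk(list(range(1000)), cache)

after = tracemalloc.take_snapshot()
# positive growth roughly proportional to the iteration count
# points at the leaking line
growth = sum(s.size_diff for s in after.compare_to(before, "lineno"))
print(growth)
```

Running the same comparison with the suspect call commented out (or the cache cleared each pass) should show the diff collapse to near zero, which localizes the leak the same way the manual experiment did.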


From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of Joseph McGlinchy
Sent: Wednesday, January 29, 2014 11:17 AM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Memory leak in numpy?

Perhaps it is an ESRI/ArcPy issue then. I don't see anything in my code that could be doing that, though, as it is very minimal.

From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of Benjamin Root
Sent: Wednesday, January 29, 2014 11:10 AM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Memory leak in numpy?

Hmmm, I see no reason why that would eat up memory. I just tried it out on my own system (numpy 1.6.1, CentOS 6, Python 2.7.1) and had no issues; memory usage stayed flat for the 10 seconds it took to go through the loop. Note, I am not using ATLAS or BLAS, so maybe the issue lies there? (I don't know whether numpy defers the dot product to ATLAS or BLAS when they are available.)
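(Not part of the original thread.) The flat-memory behavior Ben describes can be checked without numpy at all: a loop that allocates temporaries on each pass but frees them should show no net growth in traced memory. In this sketch, dot is a plain-Python stand-in for a dot product, not numpy's implementation:

```python
import tracemalloc

def dot(a, b):
    # plain-Python stand-in for a dot product; the generator creates
    # temporaries each call, all of which are freed when it finishes
    return sum(x * y for x, y in zip(a, b))

a = [1.0] * 10_000
b = [2.0] * 10_000

tracemalloc.start()
dot(a, b)  # warm up so one-time allocations are excluded
base, _ = tracemalloc.get_traced_memory()

for _ in range(200):
    dot(a, b)  # per-iteration temporaries are released each pass

now, _ = tracemalloc.get_traced_memory()
print(now - base)  # should stay near zero if nothing leaks
```

If the same pattern, baseline, loop, re-measure, is applied around the real scipy calls and the difference keeps climbing, that is where the leak lives.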

