[Numpy-discussion] Memory leak in numpy?

Joseph McGlinchy JMcGlinchy at esri.com
Wed Jan 29 17:13:01 EST 2014


Unfortunately I don't have Linux, or much time to invest in researching and learning an alternative to Valgrind :/

My current workaround, which works very well, is to move the scipy portion of the script into its own script and then use os.system() to call it with the appropriate arguments; a rough sketch of the pattern is below.
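Roughly, it looks like this (the child script name and the argument here are made up for illustration):

    # parent script: run the scipy-heavy step in a separate process so
    # the OS reclaims all of its memory when the child exits
    import os
    import sys

    input_path = sys.argv[1]  # path handed through to the child script
    status = os.system('python process_binary_image.py "%s"' % input_path)
    if status != 0:
        print('scipy step failed (exit status %d)' % status)

Whatever the leak does inside the child process, the memory goes back to the OS when that process exits.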


Thanks everyone for the replies! Is there a proper way to close the thread?


-Joe

-----Original Message-----
From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of Julian Taylor
Sent: Wednesday, January 29, 2014 11:53 AM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Memory leak in numpy?

On 29.01.2014 20:44, Nathaniel Smith wrote:
> On Wed, Jan 29, 2014 at 7:39 PM, Joseph McGlinchy <JMcGlinchy at esri.com> wrote:
>> Upon further investigation, I do believe the leak is within the scipy
>> code. I commented out my call to processBinaryImage(), which contains
>> all of the scipy calls, and my memory usage remains flat with
>> approximately 1MB of variation. Any ideas?
> 
> I'd suggest continuing along this line, chopping things out until you
> have a minimal program that still shows the problem -- that'll
> probably make it much clearer where the problem is actually coming
> from...
> 
> -n

Depending on how long the program runs, you can try running it under Massif, the Valgrind memory-usage profiling tool; that should give you a good clue where the leak is coming from.
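For reference, a typical Massif run looks something like this (the script name is a placeholder):

    valgrind --tool=massif python your_script.py
    ms_print massif.out.<pid>   # Massif writes its data to massif.out.<pid>

ms_print then shows a heap profile over time, including which allocation sites are holding the most memory.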
