On Thu, Sep 26, 2013 at 7:19 PM, José Luis Mietta < joseluismietta@yahoo.com.ar> wrote:
> Hi experts!
>
> I want to use less RAM in my Monte Carlo simulations. In my
> algorithm I use numpy arrays and the xrange() function.
>
> I heard that I can reduce the RAM used by my algorithm if I do the following:
>
> 1) replace xrange() with range().
range() will pretty much always use *more* memory than xrange().
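A quick way to see the difference is `sys.getsizeof()`. In Python 2, `range()` builds the whole list up front while `xrange()` yields values lazily; note that Python 3's `range()` behaves like the old `xrange()`, which is what this sketch demonstrates:

```python
import sys

# A lazy range object stays tiny regardless of its length;
# materializing it as a list allocates an object per element.
lazy = range(10**6)          # small, constant-size object
eager = list(range(10**6))   # a million int objects in memory

print(sys.getsizeof(lazy))   # tens of bytes
print(sys.getsizeof(eager))  # several megabytes
```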
> 2) replace numpy arrays with python lists
It depends. I recommended that to you because you were using np.append() a *lot* on large arrays, and this can cause memory fragmentation issues as these large arrays need to be reallocated every time. A list of float objects of length N will use *more* memory than an equivalent float64 array of length N. I also recommended that you simply preallocate arrays of the right size and fill them in. That would be the ideal solution.
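A minimal sketch of the two approaches, with a hypothetical sample count `n` standing in for your simulation size. Each `np.append()` call allocates a fresh array and copies every existing element, so the first loop does quadratic work overall; the preallocated version allocates once and fills in place:

```python
import numpy as np

n = 10000  # hypothetical number of Monte Carlo samples

# Slow: np.append() reallocates and copies the array on every call.
grown = np.empty(0)
for i in range(n):
    grown = np.append(grown, 0.5 * i)

# Fast: preallocate the right size once, then fill in place.
prealloc = np.empty(n)
for i in range(n):
    prealloc[i] = 0.5 * i

# Both produce the same values; only the allocation pattern differs.
assert np.array_equal(grown, prealloc)

# The float64 array is also compact: exactly 8 bytes per element,
# versus a full Python float object per element in a list.
print(prealloc.nbytes)
```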
> 3) use the reset() function to delete useless arrays.
There is no reset() function. You may have heard about the %reset magic command in IPython which clears IPython's interactive namespace. You would not use it in your code.
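In your own code, the way to release a large array you no longer need is to drop every reference to it, for example with `del`; NumPy frees the underlying buffer once nothing refers to the array. A minimal sketch:

```python
import numpy as np

big = np.zeros((1000, 1000))   # ~8 MB of float64
result = big.sum()             # use it...

del big   # removes the name; the buffer is freed once no
          # other references to the array remain

print(result)
```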
> Is that true?
>
> In addition, I want to increase the execution speed of my code (I use
> numpy and SciPy functions). How can I apply Cython? Will it help?
There is no way to tell without knowing what is taking all of the execution time in your code. You will need to profile your code to find out. Cython *can* help quite frequently. Sometimes it won't, or at least there may be much easier things you can do to speed up your code. For example, I am willing to bet that fixing your code to avoid np.append() will make your code much faster.

--
Robert Kern
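A minimal way to find the hot spots before reaching for Cython is the standard-library profiler; `simulate()` below is a hypothetical stand-in for the actual Monte Carlo loop:

```python
import cProfile
import pstats
import io

def simulate():
    # Hypothetical stand-in for the real Monte Carlo loop.
    total = 0.0
    for i in range(100000):
        total += 0.5 * i
    return total

profiler = cProfile.Profile()
profiler.enable()
simulate()
profiler.disable()

# Report the five most expensive calls by cumulative time.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

Functions dominating the cumulative-time column are the ones worth rewriting (or handing to Cython); profiling first avoids optimizing code that barely matters.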