use less RAM and increase execution speed
Hi experts!

I want to use less RAM in my Monte Carlo simulations. In my algorithm I use numpy arrays and the xrange() function. I have heard that I can reduce the RAM my algorithm uses if I do the following:

1) replace xrange() with range()
2) replace numpy arrays with Python lists
3) use the reset() function to delete unneeded arrays

Is that true?

In addition, I want to increase the execution speed of my code (I use numpy and SciPy functions). How can I apply Cython? Will it help?

Please help. Thanks a lot!!
On Thu, Sep 26, 2013 at 7:19 PM, José Luis Mietta <joseluismietta@yahoo.com.ar> wrote:
Hi experts!
I want to use less RAM in my Monte Carlo simulations. In my algorithm I use numpy arrays and the xrange() function.
I have heard that I can reduce the RAM my algorithm uses if I do the following:
1) replace xrange() with range()
range() will pretty much always use *more* memory than xrange().
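[A quick way to see the difference for yourself on Python 2, where xrange() exists; exact sizes vary by platform, so treat these numbers as illustrative:]

    import sys

    # range() materializes the full list of integers up front;
    # xrange() is a small lazy object that produces values on demand.
    print(sys.getsizeof(range(10**6)))   # several MB for the list alone
    print(sys.getsizeof(xrange(10**6)))  # a few dozen bytes, independent of N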
2) replace numpy arrays with Python lists
It depends. I recommended that to you because you were using np.append() a *lot* on large arrays, and this can cause memory fragmentation issues as these large arrays need to be reallocated every time. A list of float objects of length N will use *more* memory than an equivalent float64 array of length N. I also recommended that you simply preallocate arrays of the right size and fill them in. That would be the ideal solution.
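[A minimal sketch of the preallocation pattern described above; simulate_one() is a hypothetical stand-in for whatever computes a single Monte Carlo sample, not a function from the original code:]

    import numpy as np

    def simulate_one():
        return np.random.random()  # stand-in for your per-sample computation

    n_samples = 10**6

    # Slow: np.append() reallocates and copies the whole array on every
    # call, so the loop costs O(N^2) in total.
    # results = np.array([])
    # for i in xrange(n_samples):
    #     results = np.append(results, simulate_one())

    # Fast: allocate once, then fill in place.
    results = np.empty(n_samples, dtype=np.float64)
    for i in xrange(n_samples):
        results[i] = simulate_one()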
3) use the reset() function to delete unneeded arrays
There is no reset() function. You may have heard about the %reset magic command in IPython which clears IPython's interactive namespace. You would not use it in your code.
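[If the goal is simply to let a large temporary array be freed inside a script, dropping the last reference to it is enough; a small illustration:]

    import numpy as np

    scratch = np.empty((5000, 5000))  # ~200 MB of float64 scratch space
    # ... use scratch ...
    del scratch  # no references remain, so Python can release the memory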
Is that true?
In addition, I want to increase the execution speed of my code (I use numpy and SciPy functions). How can I apply Cython? Will it help?
There is no way to tell without knowing what is taking all of the execution time in your code. You will need to profile your code to find out. Cython *can* help quite frequently. Sometimes it won't, or at least there may be much easier things you can do to speed up your code. For example, I am willing to bet that fixing your code to avoid np.append() will make your code much faster.

-- Robert Kern
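[For reference, a minimal profiling sketch using the standard library's cProfile; run_simulation() is a hypothetical entry point, not something from the original code:]

    import cProfile
    import pstats

    def run_simulation():
        pass  # stand-in for the actual Monte Carlo driver

    cProfile.run('run_simulation()', 'mc.prof')
    pstats.Stats('mc.prof').sort_stats('cumulative').print_stats(10)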
xrange should be more memory efficient than range: http://stackoverflow.com/questions/135041/should-you-always-favor-xrange-ove...

Replacing arrays with lists is probably a bad idea for a lot of reasons. You'll lose nice vectorization of simple operations, and all of numpy's other benefits. To be more parsimonious with memory, you probably want to pay close attention to array data types and make sure that things aren't being automatically upconverted to higher-precision data types.

I've never considered using reset. Perhaps you need to take a look at your program's structure and make sure that useless arrays can be garbage collected properly.

Preallocation of arrays can give you tons of benefits with regard to array size and program speed. If you aren't using preallocation, now's a great time to start.

You can pass numpy arrays into Cython functions, and you can also call numpy/scipy functions within Cython functions. Identify your bottlenecks using some kind of profiling, then work on optimizing those with Cython.

HTH,
Mike
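[To illustrate the upconversion point above: mixing a float32 array with NumPy's default float64 silently doubles the memory per element of the result, while in-place operations keep the narrower dtype. A small sketch:]

    import numpy as np

    a32 = np.zeros(10**6, dtype=np.float32)  # 4 bytes per element
    a64 = np.zeros(10**6)                    # float64 by default: 8 bytes

    print((a32 + a64).dtype)  # float64 -- the float32 operand was upcast

    a32 += a64                # in-place: the result stays float32
    print(a32.dtype)          # float32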
participants (3)

- José Luis Mietta
- Michael Sarahan
- Robert Kern