I've come across a memory issue when assigning data to slices of a NumPy memory-mapped array. In short, if I create a memory-mapped array and repeatedly add data to subsets of it in a loop, the memory usage of the process grows over time, which suggests some kind of memory leak.
More specifically, if I run the following script:
    import random
    import numpy as np

    image = np.memmap('image.np', mode='w+', dtype=np.float32, shape=(10000, 10000))

    for i in range(1000):
        # pick a random position well inside the array
        x = random.uniform(1000, 9000)
        y = random.uniform(1000, 9000)
        imin = int(x) - 128
        imax = int(x) + 128
        jmin = int(y) - 128
        jmax = int(y) + 128
        # add a random 256x256 patch to that region of the memmap
        data = np.random.random((256, 256))
        image[imin:imax, jmin:jmax] = image[imin:imax, jmin:jmax] + data
        del x, y, imin, imax, jmin, jmax, data
the memory usage climbs to ~300 MB after 1000 iterations (and grows proportionally if I increase the number of iterations).
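For anyone trying to reproduce this, one minimal way to watch the resident memory grow is to print the process RSS every so often inside the loop, e.g. with psutil (an assumed dependency on my part; any tool that reports RSS would do, and the profiling in my write-up may use a different one):

    import os
    import psutil

    proc = psutil.Process(os.getpid())

    # ... inside the loop above, e.g. every 100 iterations:
    if i % 100 == 0:
        # resident set size of this process, in megabytes
        print('RSS after iteration %d: %.1f MB'
              % (i, proc.memory_info().rss / 1024.0 / 1024.0))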
I've written up a more detailed overview of the issue on Stack Overflow (with memory profiling):
Does anyone have any idea what is going on, and how I can avoid this issue?
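For what it's worth, one idea I've wondered about (but have not verified) is whether periodically writing dirty pages back to disk would let the OS reclaim them. np.memmap does have a flush() method for exactly this write-back, so a sketch of that variant, reusing image from the script above, would be:

    for i in range(1000):
        # ... same slice assignment as in the script above ...
        if i % 100 == 0:
            image.flush()  # write dirty pages to disk; unclear whether RSS actually drops

I don't know if this addresses the growth or merely the write-back, so I'd welcome any insight.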