[Numpy-discussion] Allocating discontiguous arrays

Albert Strasheim fullung at gmail.com
Wed Jul 19 08:22:15 EDT 2006


Hello all

In some situations, I have to work with very large matrices. My Windows
machine has 3 GB of RAM, so I would expect to be able to use most of my
process's address space for a single matrix.

Unfortunately, with matrices much larger than 700 or 800 MB, one starts
running into heap fragmentation problems: even though the process has about
2 GB of address space available, it isn't available in one contiguous block.

To see this, you can try the following code, which tries to allocate either a
~1792 MB 2-d array or a list of 1-d arrays adding up to roughly the same size:

import numpy as N
import time

fdtype = N.dtype('<f8')
bufsize = 1792*1024*1024           # ~1792 MB of double-precision data
n = bufsize // fdtype.itemsize     # total number of elements
m = int(N.sqrt(n))                 # side length of a square matrix
if 0: # this doesn't work on Windows: needs one contiguous ~1792 MB block
    x = N.zeros((m, m), dtype=fdtype)
else: # this works: m separate allocations of roughly 120 KB each
    x = [N.zeros(m, dtype=fdtype) for i in range(m)]
print(len(x))
# keep the process alive so its memory usage can be inspected
time.sleep(100000)

How does one go about allocating a discontiguous array so that one can work
around this problem?
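
To make the idea concrete, here is a rough sketch of the kind of thing I have
in mind (the ChunkedRows class below is purely illustrative and not anything
NumPy provides): the data lives in separate row blocks, so no single
contiguous allocation is needed, but of course the result is not a real
ndarray and loses compatibility with functions that expect one.

import numpy as N

class ChunkedRows(object):
    """Illustrative sketch: a 2-d array stored as separate row blocks,
    so no single contiguous allocation is required."""
    def __init__(self, nrows, ncols, rows_per_block=64, dtype='<f8'):
        self.shape = (nrows, ncols)
        self.rows_per_block = rows_per_block
        self._blocks = []
        r = 0
        while r < nrows:
            nb = min(rows_per_block, nrows - r)
            # each block is a small, independently allocated 2-d array
            self._blocks.append(N.zeros((nb, ncols), dtype=dtype))
            r += nb

    def __getitem__(self, idx):
        i, j = idx
        b, r = divmod(i, self.rows_per_block)
        return self._blocks[b][r, j]

    def __setitem__(self, idx, value):
        i, j = idx
        b, r = divmod(i, self.rows_per_block)
        self._blocks[b][r, j] = value

# usage: x = ChunkedRows(15000, 15000)  # ~1.7 GB total, allocated in blocks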

Thanks!

Regards,

Albert
