[Numpy-discussion] find machine maximum numpy array size

Tom Holderness tom.holderness at newcastle.ac.uk
Mon Jan 17 07:35:56 EST 2011


How do I find the maximum possible array size for a given data type on a given architecture?
For example if I do the following on a 32-bit Windows machine:

matrix = np.zeros((8873,9400),np.dtype('f8'))

I get:

Traceback (most recent call last):
  File "<pyshell#115>", line 1, in <module>
    matrix = np.zeros((8873,9400),np.dtype('f8'))
MemoryError
If I reduce the matrix size, it works.
However, running the original command on an equivalent 32-bit Linux machine works fine (presumably some limit on memory allocation in the Windows kernel? I tested with more available RAM and it doesn't solve the problem).
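For reference, the memory the allocation needs can be computed up front from the shape and the dtype's itemsize (a quick arithmetic check, not a guarantee the contiguous block is actually available):

```python
import numpy as np

# The failing allocation from above: 8873 x 9400 float64 values.
shape = (8873, 9400)
itemsize = np.dtype('f8').itemsize          # 8 bytes per float64
nbytes = shape[0] * shape[1] * itemsize     # total contiguous bytes needed
print(nbytes, nbytes / 2**20)               # 667249600 bytes, ~636 MiB
```

So the array itself is only about 636 MiB, well under the nominal 2 GB user address space of a 32-bit Windows process, which suggests the failure comes from not finding a contiguous free block rather than from total memory.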

Is there a way I can find this limit? When distributing software to users (who all run different architectures), it would be great if we could check this before running the process and catch the error before the user hits "run".
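One portable workaround is simply to attempt the allocation early and catch the failure, since the real limit depends on OS, address-space layout and heap fragmentation rather than any fixed number. A minimal sketch (`try_allocate` is a hypothetical helper, not part of NumPy):

```python
import numpy as np

def try_allocate(shape, dtype='f8'):
    # Hypothetical helper: attempt the allocation up front so the
    # failure is reported before the main process starts. Catching
    # MemoryError is the only portable check across architectures.
    try:
        return np.zeros(shape, np.dtype(dtype))
    except MemoryError:
        return None

matrix = try_allocate((8873, 9400))
if matrix is None:
    print("Cannot allocate the matrix on this machine")
```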

Many thanks in advance,

