segmentation fault in scipy?

Robert Kern robert.kern at
Thu May 11 00:43:57 CEST 2006

conor.robinson at wrote:
> I'm running operations on large arrays of floats, approx 25,000 x 80.
> Python (scipy) does not seem to come close to using 4GB of wired mem,
> but segfaults at around a gig. Everything works fine on smaller batches
> of data around 10,000 x 80 and uses a max of ~600MB of mem. Any ideas?
>  Is this just too much data for scipy?
> Thanks Conor
> Traceback (most recent call last):
>  File "C:\Temp\CR_2\", line 68, in ?
>    net.rProp(1.2, .5, .000001, 50.0, input, output, 1)
>  File "/Users/conorrob/Desktop/CR_2/", line 230, in rProp
>    print scipy.trace(error*scipy.transpose(error))
>  File "D:\Python24\Lib\site-packages\numpy\core\", line 149, in __mul__
>    return, other)
> MemoryError

This is not a segfault; it's a MemoryError. Is this the only error you see, or
are you actually seeing a segfault somewhere?

If error.shape == (25000, 80), then dot(error, transpose(error)) will return an
array of shape (25000, 25000). Assuming double-precision floats, that array
would take up about 4768 megabytes of memory, more than you have. Memory usage
never climbs near 4 gigabytes because the allocation of that very large result
array fails up front, so the big chunk of memory is never actually claimed.
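The arithmetic is easy to check yourself (the shapes here come straight from
the traceback; 8 bytes per element assumes double precision):

```python
n_rows = 25_000
bytes_per_float = 8  # double-precision float

# Size of the (25000, 25000) result of dot(error, transpose(error))
result_bytes = n_rows * n_rows * bytes_per_float
print(result_bytes / 2**20)  # about 4768 MiB
```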

There are two possibilities:

1. As Travis mentioned, numpy won't create the array because it is still
32-bit-limited due to the Python 2.4 C API. This has been resolved in Python 2.5.

2. The default build of numpy uses plain-old malloc(3) to allocate memory, and
it may be failing to create such large chunks of memory.
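Incidentally, if all you need is that trace, you can avoid the problem
entirely: trace(E * transpose(E)) is mathematically just the sum of the
squared entries of E, so the (25000, 25000) intermediate never has to exist.
A sketch (the variable name `error` and its shape are taken from your post;
the data here is made up):

```python
import numpy as np

# Stand-in for your error matrix, same shape as in the traceback
error = np.random.rand(25_000, 80)

# trace(E @ E.T) == sum of squared entries of E -- no huge
# (25000, 25000) intermediate array is ever allocated.
t = (error * error).sum()
```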

Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
