[Numpy-discussion] Catching out-of-memory error before it happens

Chris Barker chris.barker at noaa.gov
Fri Jan 24 17:29:19 EST 2014


On Fri, Jan 24, 2014 at 8:25 AM, Nathaniel Smith <njs at pobox.com> wrote:

> If your arrays are big enough that you're worried that making a stray copy
> will ENOMEM, then you *shouldn't* have to worry about fragmentation -
> malloc will give each array its own virtual mapping, which can be backed by
> discontinuous physical memory. (I guess it's possible windows has a somehow
> shoddy VM system and this isn't true, but that seems unlikely these days?)
>
All I know is that when I push the limits with memory on a 32-bit Windows
system, it often crashes out when I've never seen more than about 1GB
of memory use by the application -- I would have thought that would
leave plenty of headroom.

I also know that I've hit limits on 32-bit Windows well before 32-bit OS X,
but that may be because, IIUC, 32-bit Windows only allows 2GB of address
space per process, whereas 32-bit OS X allows 4GB per process.
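
Getting back to the subject line: here's a rough, untested sketch of how one
might check for headroom before a big allocation. psutil is a third-party
package, the 50% safety factor is just a guess, and the estimate can't be
exact, so a MemoryError is still possible:

    import numpy as np
    import psutil  # third-party: pip install psutil

    def safe_empty(shape, dtype=np.float64, headroom=0.5):
        """Allocate only if the array looks like it will fit in free memory."""
        nbytes = int(np.prod(shape)) * np.dtype(dtype).itemsize
        if nbytes > psutil.virtual_memory().available * headroom:
            raise MemoryError("%d bytes probably won't fit" % nbytes)
        # malloc can still fail -- the check above is only an estimate
        return np.empty(shape, dtype=dtype)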

> Memory fragmentation is more a problem if you're allocating lots of small
> objects of varying sizes.
>
It could be that's what I've been doing...
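
If that is the problem, the usual workaround I know of is to preallocate one
big contiguous array up front and fill it in place, rather than building up
lots of small ones. Very roughly, with a made-up per-record computation
standing in for the real work:

    import numpy as np

    n_records, record_len = 100000, 50

    # many small allocations of varying sizes (what tends to fragment the heap):
    #     results = [compute_one(rec) for rec in records]  # hypothetical helper
    # versus one big allocation up front, filled in place:
    results = np.empty((n_records, record_len))
    for i in range(n_records):
        results[i, :] = np.random.rand(record_len)  # stand-in for real work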

> On 32 bit, virtual address fragmentation could also be a problem, but if
> you're working with giant data sets then you need 64 bits anyway :-).
>
well, "giant" is defined relative to the system's capabilities... but yes, if
you're pushing the limits of a 32-bit system, the easiest thing to do is
go to 64 bits and add some more memory!
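
For anyone not sure which they're running, a quick check along these lines
will tell you:

    import platform
    import struct
    import sys

    print(platform.architecture()[0])  # '32bit' or '64bit'
    print(struct.calcsize("P") * 8)    # pointer width in bits
    print(sys.maxsize > 2**32)         # True on a 64-bit Python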

-CHB

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

