[SciPy-User] Strange memory limits
David Baddeley
david_baddeley at yahoo.com.au
Mon Mar 28 19:51:27 EDT 2011
Hi Chris,
what you're probably running into is a problem with allocating a contiguous
block of memory / a memory fragmentation issue. Depending on how Windows has
scattered the bits of your program (and how much you've allocated and
deleted), you might have lots of small chunks of memory allocated throughout your
3GB address space. When python asks for a contiguous block, it finds that there
are none of that size available, despite the fact that the total amount of free
memory is sufficient. This doesn't just affect python/numpy - I've had major issues with
this in Matlab as well (if anything, Matlab seems worse). I've generally found
I've been unable to reliably allocate contiguous blocks over ~1/4 of the total
memory size. This also gets worse the longer Windows (and your program) has been
running.
Compiling as 64-bit might solve your problem, as, with 12 GB of memory, there
will be a much larger address space in which to look for contiguous blocks, but it
probably doesn't address the fundamental issue. I suspect you could get away with
much smaller contiguous blocks (e.g. have 3 separate arrays for the 3 different
cameras), or even a new array for each image - see the rough sketch below.
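As a very rough sketch (untested, the names are just placeholders, and I've assumed
unsigned 16-bit pixels - adapt to however your buffers are actually set up):

import numpy as np

n_cameras = 3
n_images = 735
frame_shape = (512, 512)

# One array per camera: each block only needs ~370 MB of contiguous address
# space instead of ~1.1 GB for the combined 3x735x512x512 buffer.
per_camera = [np.empty((n_images,) + frame_shape, dtype=np.uint16)
              for _ in range(n_cameras)]

# Or go further and allocate a fresh ~0.5 MB array per incoming image,
# which is almost immune to fragmentation.
per_image = [[] for _ in range(n_cameras)]
per_image[0].append(np.empty(frame_shape, dtype=np.uint16))

Each sub-array is still contiguous internally, so the C++ side can write into it
directly; you just lose the single flat buffer.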
cheers,
David
________________________________
From: Chris Weisiger <cweisiger at msg.ucsf.edu>
To: SciPy Users List <scipy-user at scipy.org>
Sent: Tue, 29 March, 2011 11:22:03 AM
Subject: [SciPy-User] Strange memory limits
(This is unrelated to my earlier question about 2D data slicing)
We have a 32-bit Windows program that has Python bindings which do most of the
program logic, reserving the C++ side for heavy lifting. This program needs to
reserve buffers of memory to accept incoming image data from our different
cameras -- it waits until it has received an image from all active cameras, then
saves the image to disk, repeat until all images are in. So the Python side uses
numpy to allocate a block of memory, then hands it off to the C++ side where
images are written to it and then later stored. Ordinarily all of our cameras
operate in sync, so the delay between the first and last cameras is small
and we can keep the memory buffer small. I'm working on a modified data
collection mode where each camera does a lengthy independent sequence, though,
requiring me to either rewrite the data saving system or simply increase the
buffer size.
Increasing the buffer size works just fine until I try to allocate about a
3x735x512x512 array (camera/Z/X/Y) of 16-bit ints, at which point I get a
MemoryError. This is only a bit over 1GB worth of memory (out of 12GB on the
computer), and according to Windows' Task Manager the program was only using
about 100MB before I tried the allocation -- though of course I've no idea how
well the Task Manager figure maps onto how much memory I've actually requested. So that's a bit
strange. I ought to have 4GB worth of space (or at the very least 3GB), which is
more than enough for what I need.
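For concreteness, a rough sketch of the allocation and its size (not our actual
code, and I'm assuming unsigned 16-bit here):

import numpy as np

shape = (3, 735, 512, 512)                  # camera / Z / X / Y
itemsize = np.dtype(np.uint16).itemsize     # 2 bytes per pixel
print(np.prod(shape) * itemsize / 2**30)    # ~1.08 GiB

# The line that actually fails is essentially:
# buf = np.zeros(shape, dtype=np.uint16)    # -> MemoryError in the 32-bit build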
Short of firing up a memory debugger, any suggestions for tracking down big
allocations? Numpy *should* be our only major offender here aside from the C++
portion of the program, which is small enough for me to examine by hand. Would
it be reasonable to expect to see this problem go away if we rebuilt as a 64-bit
program with 64-bit numpy et al?
Thanks for your time.
-Chris