[Numpy-discussion] Numpy doesn't use RAM

YueCompl compl.yue at icloud.com
Wed Mar 25 02:18:28 EDT 2020

An alternative solution may be numpy.memmap: https://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html

If you are sure your subsequent computation over the array data has enough locality to avoid thrashing, I think numpy.memmap would work for you; it amounts to using an explicit disk file as swap.
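A minimal sketch, assuming the 600,000 x 600,000 float32 array from the thread below should be backed by a disk file (the file name is an illustrative placeholder):

    import numpy as np

    shape = (600_000, 600_000)
    # mode="w+" creates the backing file; on most filesystems it is
    # allocated sparsely, so untouched regions cost no disk space yet.
    a = np.memmap("big_matrix.dat", dtype=np.float32, mode="w+", shape=shape)
    a[0, :1000] = 1.0   # touched pages live in the page cache, not the heap
    a.flush()           # write dirty pages back to the file

The OS pages data in and out on demand, so it is the working-set size, not the total array size, that has to fit in RAM.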

In my environment we mmap a lot of on-disk data files from C++ (after Python has read the metadata), then wrap the buffers as ndarrays. That is enough to run out-of-core programs, as long as the data access pattern fits in physical RAM at any instant; even scanning the whole dataset along the time axis is then okay (in real-world time, not data time).
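The pure-Python equivalent of that wrapping step looks roughly like this (our production code does the mmap in C++; the file name and dtype here are illustrative assumptions):

    import mmap
    import numpy as np

    with open("chunk_0001.dat", "rb") as f:
        buf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

    # Zero-copy: the ndarray is a view over the mapped pages, which the
    # kernel faults in from disk only when elements are actually touched.
    arr = np.frombuffer(buf, dtype=np.float32)
    print(arr[:10])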

Memory (address-space) fragmentation is a problem, and so is the OS' `nofile` limit (the number of file handles a process may hold open) when too many small data files are involved. We are switching to a solution based on a FUSE filesystem that presents the many small files on a remote storage server as one virtual large file.
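For reference, the per-process `nofile` limit can be inspected, and raised up to the hard cap, from Python on Unix systems:

    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print(soft, hard)
    # Raise the soft limit to the hard cap; going beyond the hard cap
    # needs administrator-level changes (e.g. /etc/security/limits.conf).
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))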


> On 2020-03-25, at 02:35, Stanley Seibert <sseibert at anaconda.com> wrote:
> In addition to what Sebastian said about memory fragmentation and OS limits on memory allocations, I do think it will be hard to work with an array that close to the memory limit in NumPy regardless. Almost any operation will need to make a temporary array and exceed your memory limit. You might want to look at Dask Array for a NumPy-like API for working with chunked arrays that can be staged in and out of memory:
> https://docs.dask.org/en/latest/array.html
> As a bonus, Dask will also let you make better use of the large number of CPU cores that you likely have in your 1.9 TB RAM system.  :)
> On Tue, Mar 24, 2020 at 1:00 PM Keyvis Damptey <quantkeyvis at gmail.com> wrote:
> Hi Numpy dev community,
> I'm Keyvis, a statistical data scientist.
> I'm currently using numpy in Python 3.8.2 (64-bit) for a clustering problem, on a machine with 1.9 TB of RAM. When I try using np.zeros to create a 600,000 by 600,000 matrix of dtype=np.float32, it says
> "Unable to allocate 1.31 TiB for an array with shape (600000, 600000) and data type float32"
> I used psutil to determine how much RAM Python thinks it has access to, and it returned approximately 1.8 TB.
> Is there some way I can get numpy to create these large arrays?
> Thanks for your time and consideration
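For context on the error quoted above: the 1.31 TiB figure is simply the array's size, since a float32 is 4 bytes:

    rows = cols = 600_000
    nbytes = rows * cols * 4      # 1.44e12 bytes
    print(nbytes / 2**40)         # ~1.31 TiB

So the allocation alone nearly fills a 1.9 TB machine, and any temporary of the same shape pushes it over the limit.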
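And regarding the Dask Array suggestion quoted above, a minimal sketch (the chunk sizes are illustrative assumptions, not tuned recommendations):

    import dask.array as da

    x = da.zeros((600_000, 600_000), chunks=(10_000, 10_000), dtype="float32")
    y = (x + 1).mean()   # builds a lazy task graph; nothing is allocated yet
    print(y.compute())   # executes chunk by chunk, in parallel across cores

Each 10,000 x 10,000 float32 chunk is 400 MB, so only a handful of chunks need to be resident at any moment.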

