Loading large NIfTI file -> MemoryError
Hello everyone,

I am trying to load a (large) NIfTI file (dMRI from the Human Connectome Project, about 1 GB) with NiBabel:

    import nibabel as nib
    img = nib.load("dmri.nii.gz")
    data = img.get_data()

The program crashes during img.get_data() with a MemoryError (my machine has 4 GB of RAM).

Any suggestions?

Best regards,
AMIRA
On 31.12.2013 14:13, Amira Chekir wrote:
Hello everyone,

I am trying to load a (large) NIfTI file (dMRI from the Human Connectome Project, about 1 GB) with NiBabel:

    import nibabel as nib
    img = nib.load("dmri.nii.gz")
    data = img.get_data()

The program crashes during img.get_data() with a MemoryError (my machine has 4 GB of RAM).
Any suggestions?
Are you using a 64-bit operating system? Which version of numpy? Assuming nibabel uses np.load under the hood, you could try it with numpy 1.8, which reduces excess memory usage when loading compressed files.
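Both are easy to check from Python itself; a minimal sketch, using only the standard library and numpy:

    import platform
    import numpy as np

    # A 32-bit Python build can typically address only ~2-3 GB of memory,
    # so the build architecture matters for gigabyte-sized images.
    print(platform.architecture()[0])   # '64bit' or '32bit'
    print(np.__version__)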
Hi,

On Tue, Dec 31, 2013 at 1:29 PM, Julian Taylor <jtaylor.debian@googlemail.com> wrote:
On 31.12.2013 14:13, Amira Chekir wrote:
Hello everyone,

I am trying to load a (large) NIfTI file (dMRI from the Human Connectome Project, about 1 GB) with NiBabel:

    import nibabel as nib
    img = nib.load("dmri.nii.gz")
    data = img.get_data()

The program crashes during img.get_data() with a MemoryError (my machine has 4 GB of RAM).
Any suggestions?
Are you using a 64-bit operating system? Which version of numpy?
I think you want the nipy-devel mailing list for this question: http://nipy.org/nibabel/

I'm guessing that the reader is loading the raw data - which is, say, int16 - and then multiplying by the scale factors to make a float64 image, which is four times larger.

We're working on an iterative load API at the moment that might help with loading the image slice by slice: https://github.com/nipy/nibabel/pull/211

It should be merged in a week or so - but it would be very helpful if you would try out the proposal to see if it helps.

Best,

Matthew
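With current nibabel releases, two approaches along these lines should reduce the peak memory. This is a sketch only; the array-proxy slicing and the get_fdata dtype argument were added in releases that postdate this thread, so treat them as assumptions about the modern API rather than what was available here:

    import nibabel as nib
    import numpy as np

    img = nib.load("dmri.nii.gz")

    # Read one 3D volume at a time through the array proxy instead of
    # materialising the whole scaled 4D array (which becomes float64,
    # four times the size of the on-disk int16 data).
    vol0 = img.dataobj[..., 0]

    # Or, if the full array is needed, halve the memory by asking for
    # float32 instead of the default float64.
    data = img.get_fdata(dtype=np.float32)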
participants (3)
- Amira Chekir
- Julian Taylor
- Matthew Brett