On Tue, Dec 31, 2013 at 1:29 PM, Julian Taylor email@example.com wrote:
On 31.12.2013 14:13, Amira Chekir wrote:
I am trying to load a (large) NIfTI file (DMRI from the Human Connectome Project, about 1 GB) with NiBabel:

import nibabel as nib
img = nib.load("dmri.nii.gz")
data = img.get_data()
The program crashes during "img.get_data()" with a MemoryError (my machine has 4 GB of RAM).
Are you using a 64-bit operating system? Which version of numpy?
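Those two checks can be done from the interpreter itself; a 32-bit Python process can hit MemoryError well below the machine's physical RAM, since its address space is capped at 2-4 GB:

```python
import sys
import numpy as np

# sys.maxsize > 2**32 is True only in a 64-bit Python build;
# a 32-bit build cannot address enough memory for this file.
is_64bit = sys.maxsize > 2**32
print("64-bit Python:", is_64bit)
print("numpy version:", np.__version__)
```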
I think you want the nipy-devel mailing list for this question:
I'm guessing that the reader is loading the raw data, which is, say, int16, and then multiplying by the scale factors to make a float64 image, which is 4 times larger.
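A quick back-of-the-envelope sketch of that blow-up (the 4D shape below is a made-up, HCP-like example, not the actual dimensions of the file in question):

```python
import numpy as np

# Hypothetical diffusion volume shape, for illustration only.
shape = (145, 174, 145, 288)
n_vox = np.prod(shape, dtype=np.int64)

gb_int16 = n_vox * np.dtype(np.int16).itemsize / 1e9
gb_float64 = n_vox * np.dtype(np.float64).itemsize / 1e9
print("as stored (int16):   %.1f GB" % gb_int16)
print("after scaling (float64): %.1f GB" % gb_float64)

# Applying a float slope/intercept promotes the data to float64,
# quadrupling the footprint of an int16 array.
raw = np.ones((2, 2), dtype=np.int16)
scaled = raw.astype(np.float64) * 1.5
print(scaled.dtype)
```

So even if the int16 data fits in 4 GB of RAM, the scaled float64 copy easily will not.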
We're working on an iterative load API at the moment that might help by loading the image slice by slice:

It should be merged in a week or so, but it would be very helpful if you would try out the proposal to see if it helps.