[Numpy-discussion] NumPy-Discussion Digest, Vol 88, Issue 1

Julian Taylor jtaylor.debian at googlemail.com
Wed Jan 1 10:56:14 EST 2014


On 01.01.2014 16:50, Amira Chekir wrote:
> On 31.12.2013 14:13, Amira Chekir wrote:
>> > Hello everyone,
>> >
>> > I am trying to load a (large) NIfTI file (dMRI from the Human Connectome
>> > Project, about 1 GB) with NiBabel.
>> >
>> > import nibabel as nib
>> > img = nib.load("dmri.nii.gz")
>> > data = img.get_data()
>> >
>> > The program crashes during "img.get_data()" with a "MemoryError"
>> > (my machine has 4 GB of RAM).
>> >
>> > Any suggestions?
>>
>> Are you using a 64-bit operating system?
>> Which version of numpy?
>>
>> Assuming nibabel uses np.load under the hood, you could try numpy 1.8,
>> which reduces excess memory usage when loading compressed files.
> 
> Hi,
> Thanks for your answer.
> I use Ubuntu 12.04 32-bit and Python 2.7.
> I upgraded numpy to 1.8, but the error persists.
> I think the problem is in gzip.py:
>   max_read_chunk = 10 * 1024 * 1024  # 10 MB
> What do you think?
> 

On a 32-bit system a single process can only address about 2 GB of RAM
(even if the machine has 4 GB). A single copy of your data already
exhausts this, and extra copies are hard to avoid with numpy.
Use a 64-bit operating system with more RAM, or try to chunk your
workload into smaller pieces.
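
As a rough sketch of the chunking idea: decompress the file on disk first
(e.g. with gunzip), since nibabel cannot memory-map a .gz file, and then
work on one 3D volume at a time. This assumes the voxel values need no
on-the-fly scaling (which would force a full in-memory copy); the
per-volume mean below is just a stand-in for whatever you actually compute.

import numpy as np
import nibabel as nib

img = nib.load("dmri.nii")      # uncompressed file, unlike dmri.nii.gz
data = img.get_data()           # ideally backed by a np.memmap, not a full copy

n_vols = data.shape[-1]         # 4D dMRI data: x, y, z, volumes
vol_means = np.empty(n_vols)
for i in range(n_vols):
    vol = data[..., i]          # only this volume's pages are read from disk
    vol_means[i] = vol.mean()   # stand-in for the real per-volume work

Even then a 32-bit process can run out of address space just mapping a
file of this size, so the 64-bit route really is the more reliable fix.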


