[Numpy-discussion] striding through arbitrarily large files

Richard Hattersley rhattersley at gmail.com
Wed Feb 5 15:11:54 EST 2014

On 4 February 2014 15:01, RayS <rays at blue-cove.com> wrote:

>  I was struggling with methods of reading large disk files into numpy
> efficiently (not FITS or .npy, just raw files of IEEE floats from
> numpy.tostring()). When loading arbitrarily large files it would be nice to
> not read more than the plot can display before zooming in. There
> apparently are no built-in methods that allow skipping/striding...
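One minimal sketch of the decimated-read idea: np.memmap maps the raw file without loading it, and a strided slice only touches the pages containing the selected elements. The filename and decimation factor here are illustrative, not from the original post.

```python
import numpy as np

# Create a small raw file of IEEE float64 values as a stand-in for a
# large on-disk dataset written with numpy.tofile()/tostring().
data = np.arange(1_000_000, dtype=np.float64)
data.tofile("samples.dat")

# Map the file read-only; nothing is loaded into memory yet.
mm = np.memmap("samples.dat", dtype=np.float64, mode="r")

# Take every 1000th sample for plotting; only the pages holding those
# elements are actually read from disk.
decimated = mm[::1000]
print(decimated.shape)  # (1000,)
```

This avoids reading the whole file just to draw an overview plot; zooming in can then re-slice the memmap at finer resolution.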

Since you mentioned the plural "files": are your datasets entirely
contained within a single file? If not, you might be interested in Biggus (
https://pypi.python.org/pypi/Biggus). It's a small pure-Python module that
lets you glue together arrays (such as those from smmap) into a single,
arbitrarily large virtual array. You can then step over the virtual array
and it maps the requested indices back to the underlying sources.
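To illustrate the "virtual array" idea (this is only a hand-rolled sketch, not the Biggus API): several memmapped files can be presented as one logical 1-D array, with strided reads resolved back to whichever source holds each index. The class and file names below are hypothetical.

```python
import numpy as np

class GluedArrays:
    """Present a list of 1-D arrays/memmaps as one logical array."""
    def __init__(self, parts):
        self.parts = parts
        # Global start offset of each part within the virtual array.
        self.offsets = np.cumsum([0] + [len(p) for p in parts])

    def __len__(self):
        return int(self.offsets[-1])

    def __getitem__(self, key):
        # Resolve a slice against the global index space, then gather
        # only the needed elements from each underlying source.
        idx = np.arange(*key.indices(len(self)))
        out = np.empty(len(idx), dtype=self.parts[0].dtype)
        for i, part in enumerate(self.parts):
            lo, hi = self.offsets[i], self.offsets[i + 1]
            mask = (idx >= lo) & (idx < hi)
            out[mask] = np.asarray(part)[idx[mask] - lo]
        return out

# Two raw files standing in for large on-disk datasets.
np.arange(5, dtype=np.float64).tofile("a.dat")
np.arange(5, 10, dtype=np.float64).tofile("b.dat")

glued = GluedArrays([np.memmap("a.dat", dtype=np.float64, mode="r"),
                     np.memmap("b.dat", dtype=np.float64, mode="r")])
print(glued[::3])  # every 3rd element across both files
```

Biggus generalises this to N-dimensional stacking and mosaicking, plus lazy aggregation, without the caller having to track which file holds which index.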

