[Numpy-discussion] Aggregate memmap
Robert Kern
robert.kern at gmail.com
Thu Apr 22 17:16:50 EDT 2010
On Wed, Apr 21, 2010 at 14:41, Matthew Turk <matthewturk at gmail.com> wrote:
> Hi there,
>
> I have quite a bit of unformatted Fortran data that I'd like to use as
> input to a memmap, as a sort of staging area for selecting
> subregions to be loaded into RAM. Unfortunately, what I'm running
> into is that the data was output as a set of "slices" through a 3D
> cube, instead of a single 3D cube -- the end result being that each
> slice also contains a record delimiter. I was wondering if there's a
> way either to specify that each traversal through the slowest-varying
> dimension requires an additional offset, or to calculate individual
> offsets for each slice myself and then aggregate these into a "super
> memmap."
Is the record delimiter uniform in size? Like always 8 bytes or
something similar? If so, you can make a record dtype that contains
the delimiter and the slice array.
np.dtype([('delimiter', '|V8'), ('slice', np.float32, (N, M))])
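A minimal sketch of how that record dtype could be combined with np.memmap. The slice shape, the slice count, the file path, and the assumption that each slice is preceded by exactly 8 bytes of delimiter are all illustrative; check how your Fortran compiler actually lays out record markers (many write a 4-byte length before and after each record, which only sums to 8 bytes between consecutive slices):

```python
import numpy as np
import os
import tempfile

# Assumed dimensions of each 2D slice (illustrative only).
N, M = 4, 3

# Record dtype: an opaque 8-byte delimiter followed by one N x M
# float32 slice. '|V8' is 8 raw bytes that NumPy carries but ignores.
rec = np.dtype([('delimiter', '|V8'), ('slice', np.float32, (N, M))])

# Build a small synthetic file with the same layout, since we don't
# have the original Fortran output here.
path = os.path.join(tempfile.mkdtemp(), 'cube.dat')
arr = np.zeros(3, dtype=rec)
arr['slice'] = np.arange(3, dtype=np.float32)[:, None, None]
arr.tofile(path)

# Memory-map the whole file as an array of records; selecting the
# 'slice' field gives a (nslices, N, M) view that skips the delimiters.
mm = np.memmap(path, dtype=rec, mode='r')
cube = mm['slice']
```

Subregions such as `cube[1:3, :2, :]` can then be pulled into RAM on demand, which is the staging-area behavior the original question was after.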
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco