Fast file data retrieval?

Arnaud Delobelle arnodel at gmail.com
Mon Mar 12 21:31:26 CET 2012


On 12 March 2012 19:39, Virgil Stokes <vs at it.uu.se> wrote:
> I have a rather large ASCII file that is structured as follows
>
> header line
> 9 nonblank lines with alphanumeric data
> header line
> 9 nonblank lines with alphanumeric data
> ...
> ...
> ...
> header line
> 9 nonblank lines with alphanumeric data
> EOF
>
> where, a data set contains 10 lines (header + 9 nonblank) and there can be
> several thousand
> data sets in a single file. In addition, each header has a unique ID code.
>
> Is there a fast method for the retrieval of a data set from this large file
> given its ID code?

It depends.  If it's a long-running application, you could load all
the data into a dictionary at startup time (several thousand data sets
doesn't sound like that much).  Another option would be to put it all
into a dbm database file
(http://docs.python.org/library/dbm.html) - that would be very easy to
do.
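For example, here's a rough sketch of the dbm approach using Python 3's
dbm module.  It assumes the unique ID is the first token of each header
line (you didn't say what the headers look like, so adjust the key
extraction to suit), and it stores the 9 data lines under that key:

```python
import dbm

def build_db(ascii_path, db_path):
    """One-time scan: store each data set in a dbm file keyed by ID."""
    with dbm.open(db_path, "n") as db, open(ascii_path) as f:
        while True:
            header = f.readline()
            if not header:
                break  # end of file
            # Assumption: the unique ID is the first token of the header.
            key = header.split()[0]
            # Store the 9 nonblank data lines as one value.
            db[key] = "".join(f.readline() for _ in range(9))

def lookup(db_path, id_code):
    """Fetch the 9 data lines for a given ID code."""
    with dbm.open(db_path, "r") as db:
        return db[id_code].decode()
```

After the one-time build, lookups are just key accesses on the dbm
file, so you never reread the whole ASCII file.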

Or you could have your own custom solution: scan the file once and
build a dictionary mapping ID codes to file offsets, then when a data
set is requested, seek directly to the correct position and read it.
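A sketch of that offset-index idea (again assuming the ID is the first
token of the header line - adapt as needed).  The file is opened in
binary mode so that tell()/seek() offsets are exact byte positions:

```python
def build_index(path):
    """Scan once, mapping each ID code to the byte offset of its header."""
    index = {}
    with open(path, "rb") as f:
        while True:
            offset = f.tell()
            header = f.readline()
            if not header:
                break  # end of file
            # Assumption: the unique ID is the first token of the header.
            index[header.split()[0].decode()] = offset
            for _ in range(9):
                f.readline()  # skip the data lines
    return index

def read_dataset(path, index, id_code):
    """Seek straight to a data set: header line + 9 data lines."""
    with open(path, "rb") as f:
        f.seek(index[id_code])
        return [f.readline().decode() for _ in range(10)]
```

The index itself is small (one dict entry per data set), and each
lookup costs one seek plus ten reads, however big the file gets.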
-- 
Arnaud
