python reading file memory cost

Thomas Jollans t at jollybox.de
Mon Aug 1 11:29:08 EDT 2011


On 01/08/11 17:05, Tong Zhang wrote:
> Hello, everyone!
> 
> I am trying to read a fairly large txt file (~1 GB) with Python 2.7. What I
> want to do is read the data into an array. While doing so I monitor the
> memory cost, and I found that it uses more than 6 GB of RAM! So I have two
> questions:
> 
> 1: How can I estimate the memory cost before executing the Python script?
> 
> 2: How can I save RAM without increasing execution time?

How are you reading the file? If you are using file_object.read(),
.readlines(), or similar to read the whole file at once: don't. That is a
tremendous waste of memory, and it probably slows things down as well.
Usually, the best approach is to iterate over the file object itself
(for line in file_object: # process line), as sketched below.
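
A minimal sketch of that pattern (the file name and process_line() are
placeholders, not anything from your script):

    def process_line(line):
        # Replace this with whatever you actually do per record.
        pass

    # Streaming the file keeps only one line in memory at a time,
    # instead of the whole ~1 GB (and much more once stored as Python objects).
    with open('data.txt') as file_object:
        for line in file_object:
            process_line(line.rstrip('\n'))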

Without knowing what you're doing with the data (or what the "data" here
actually is), we can't really do much to help you. My best guess is that
you're unnecessarily storing the data multiple times.

Perhaps you can use the csv module? Do you really need to hold all the
data in memory all the time, or can you process it in the order it appears
in the file, never holding more than one (or a few) records in memory at
once? With generators, Python has excellent support for working with
streams of data like this, and it would save you a lot of RAM.
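
For example, a rough sketch of a generator-based approach (assuming a
comma-separated file, and using a running sum of one numeric column to
stand in for your real processing):

    import csv

    def records(path):
        # Generator: yields one parsed row at a time instead of
        # building a huge list of all rows.
        with open(path, 'rb') as f:   # 'rb' for the csv module on Python 2
            for row in csv.reader(f):
                yield row

    total = 0.0
    for row in records('data.txt'):   # 'data.txt' is a placeholder
        total += float(row[0])        # assumes the first column is numeric
    print(total)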

 - Thomas



