[Tutor] Load Entire File into memory

Amal Thomas amalthomas111 at gmail.com
Mon Nov 4 14:30:29 CET 2013


Yes, I have found that loading the file into RAM first and then reading it
line by line saves a huge amount of time, since my text files are very large.
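
Roughly, the comparison I am making looks like this (a sketch, not my
exact code; "<processing>" stands in for the real per-line work):

import time

start = time.time()
f = open("output.txt")
content = f.read().split('\n')
f.close()
for lines in content:
    pass  # <processing>
print("load into RAM first:", time.time() - start, "s")

start = time.time()
with open("output.txt") as f:
    for line in f:
        pass  # <processing>
print("read line by line:", time.time() - start, "s")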


On Mon, Nov 4, 2013 at 6:46 PM, Alan Gauld <alan.gauld at btinternet.com> wrote:

> On 04/11/13 13:06, Amal Thomas wrote:
>
>> Present code:
>>
>> f = open("output.txt")
>> content = f.read().split('\n')
>> f.close()
>>
>
> If your objective is to save time, then you should replace this with
> f.readlines(), which will save you reprocessing the entire file to remove
> the newlines.
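>
> For example, a sketch along the same lines (note that readlines() keeps
> the trailing '\n' on each line, so adjust the processing accordingly):
>
> f = open("output.txt")
> content = f.readlines()
> f.close()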
>
>> for lines in content:
>>     <processing>
>> content.clear()
>>
>
> But if you are processing line by line, what makes you think that reading
> the entire file into RAM and then reprocessing it is faster than reading
> it line by line?
>
> Have you tried that on another file and measured any significant
> improvement? There are times when reading into RAM is faster, but I'm not
> sure this will be one of them.
>
> with open("output.txt") as f:
>     for line in f:
>         <processing>
>
> may be your best bet.
>
>> import io
>>
>> f = open("output.txt")
>> content = io.StringIO(f.read())
>> f.close()
>> for lines in content:
>>     <processing>
>> content.close()
>>
>
> --
> Alan G
> Author of the Learn to Program web site
> http://www.alan-g.me.uk/
> http://www.flickr.com/photos/alangauldphotos
>
> _______________________________________________
> Tutor maillist  -  Tutor at python.org
> To unsubscribe or change subscription options:
> https://mail.python.org/mailman/listinfo/tutor
>



-- 



AMAL THOMAS
Fourth Year Undergraduate Student
Department of Biotechnology
IIT KHARAGPUR-721302