[Baypiggies] reading files quickly and efficiently

Sam Penrose sampenrose at gmail.com
Sun Dec 5 22:37:50 CET 2010


It sounds like the community really came through for the specific case
of biology data. For the general problem of processing large files, David
Beazley's generators-for-systems-programming slide deck has some
interesting ideas:

    http://www.dabeaz.com/generators/Generators.pdf
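
To give a flavor of the approach, here is a minimal sketch of chaining
generators so that several large files are filtered and counted one line
at a time. The file names, the 'ERROR' pattern, and the gen_* helper
names are illustrative, not taken from the deck verbatim:

    def gen_open(paths):
        # Open each file lazily, one at a time.
        for path in paths:
            yield open(path)

    def gen_cat(sources):
        # Chain the lines of many open files into a single stream.
        for src in sources:
            for line in src:
                yield line

    def gen_grep(pattern, lines):
        # Keep only the lines that contain the pattern.
        for line in lines:
            if pattern in line:
                yield line

    # Illustrative usage: count the matching lines across several large
    # files without ever holding more than one line in memory.
    paths = ['big-log-1.txt', 'big-log-2.txt']   # hypothetical file names
    matches = gen_grep('ERROR', gen_cat(gen_open(paths)))
    print(sum(1 for line in matches))

Because each stage is a generator, nothing is buffered beyond the current
line, which is the same property that keeps the sum(1 for line in f)
idiom quoted below cheap on very large files.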

> On Wed, Nov 17, 2010 at 7:43 PM, Ned Deily <nad at acm.org> wrote:
>> In article
>> <AANLkTinTGwaqmRt+1rJWvjY6ooqQPfUDtNV1QqaB_i05 at mail.gmail.com>,
>>  wesley chun <wescpy at gmail.com> wrote:
>>> MODIFIED:
>>> f = open ('nr', 'r')
>>> print sum(1 for line in f)
>>> f.close()
>>>
>>> can anyone else improve on this?
>>
>> with open('nr', 'r') as f:
>>  print(sum(1 for line in f))
>>
>> should work on any Python from 2.6 to 3.2, and 2.5 with
>>  from __future__ import with_statement
>>
>> --
>>  Ned Deily,
>>  nad at acm.org
>>
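
For completeness, the quoted suggestion rolled into one runnable snippet
(a minimal sketch; 'nr' is the file name used earlier in the thread):

    from __future__ import with_statement   # only needed on Python 2.5

    with open('nr', 'r') as f:
        print(sum(1 for line in f))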

