amused at webamused.com
Thu Mar 23 05:02:07 CET 2000
Dana Booth wrote:
> I have a small proggie that reads through the mailspool, looking for lines
> containing keywords. If the mailbox contains MIME-encoded attachments, or
> for some other reason is very large, it seems to take quite a long time. The
> util uses readline() and the re.search.
> Is readline() not very efficient? I know that Perl is really the ticket for
> most text processing utils that require anything more than Awk, and in
> this case it's much faster, but the Python util goes on to do much more, and
> I didn't want to call on Perl just for a textfile read.
> Anyway, I'm pretty new to Python, is there a better way to analyze
> textfiles? Or is the re.search slowing it down?
> Dana Booth <dana at mmi.oz.net>
> Tacoma, Wa., USA
> key at pgpkeys.mit.edu:11371
Depending on how big your file is, it might be quicker to suck the
whole thing into memory at once with readlines(), e.g.:

    for line in myFile.readlines():
        ...

If the file is too big for that, you can pass a sizehint to
readlines() (e.g. myFile.readlines(1000000)), which tells Python to
read only about that many bytes at a gulp.
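Putting the two suggestions together, here is a minimal sketch of a keyword scanner that reads the mailspool in sizehint-sized gulps; the pattern, file path, and function name are illustrative assumptions, not from the original post:

```python
import re

# Hypothetical keyword pattern -- substitute whatever the utility looks for.
KEYWORDS = re.compile(r"urgent|invoice", re.IGNORECASE)

def scan_mailspool(path, sizehint=1000000):
    """Yield lines matching KEYWORDS, reading ~sizehint bytes per gulp."""
    with open(path) as f:
        while True:
            # readlines(sizehint) returns complete lines totalling roughly
            # sizehint bytes, so memory stays bounded on huge mailboxes.
            lines = f.readlines(sizehint)
            if not lines:
                break
            for line in lines:
                if KEYWORDS.search(line):
                    yield line
```

Compiling the regex once outside the loop (rather than calling re.search with a pattern string each time) also avoids recompiling it on every line.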