iostream-like lib?

Max Khesin max at cNOvSisiPonAtecMh.com
Thu May 15 14:40:18 EDT 2003


Efficiency is a possible reason. My files may be very large, and I would not
like to read a couple of megabytes (if a line is that long) and then call
split() on it just to get the next word.

--
========================================
Max Khesin, software developer -
max at cNvOiSsPiAoMntech.com
[check out our image compression software at www.cvisiontech.com, JBIG2-PDF
compression @
www.cvisiontech.com/cvistapdf.html]


"Anton Muhin" <antonmuhin at sendmail.ru> wrote in message
news:ba0mj3$1vns$1 at news.peterlink.ru...
> Max Khesin wrote:
> > The trouble is that readline() reads more than I have to in the first
> > place, even before I call split().
> > I did hack it along the lines you suggested with a generator (limiting
> > readline() to a number of bytes and accounting for the last character
> > being possibly whitespace). I was just wondering if (and why not) there
> > is/is not direct support for whitespace-delimited input.
>
> I don't know :)
>
> The only thig I want to add: I see no reason why you should limit
> readline to a number of bytes. Doesn't the following code (untested) work?
>
> def read_tokens(filename):
>      f = file(filename)
>      for l in f:
>           for token in l.split():
>                yield token
>      f.close()
>
> Best regards,
> anton.
>
