Why is "for line in f" faster than readline()

Alexandre Ferrieux alexandre.ferrieux at gmail.com
Fri Jul 27 20:17:38 CEST 2007

On Jul 27, 2:16 pm, Duncan Booth <duncan.bo... at invalid.invalid> wrote:
> Alexandre Ferrieux <alexandre.ferri... at gmail.com> wrote:
> > Now, *why* is such buffering gaining speed over stdio's fgets(), which
> > already does input buffering (though in a more subtle way, which makes
> > it still usable with pipes etc.) ?
> Because the C runtime library has different constraints than Python's file
> iterator.
> In particular the stdio fgets() must not read ahead (because the stream
> might not be seekable), so it is usually just implemented as a series of
> calls to read one character at a time until it has sufficient characters.

Sorry, but this is simply wrong. Try an strace on a process doing
fgets(), and you'll see that it does large read()s. Moreover, the
semantics of a blocking read on those live sources is to block while
there is no data, but to return whatever is available (possibly fewer
bytes than asked) as soon as new data arrive.
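That partial-return behaviour is easy to check from Python itself, without strace (a small sketch using os.read on a pipe; the 4096-byte request size is just an illustrative choice):

```python
import os

# A blocking read() on a pipe returns as soon as *some* data is
# available, possibly fewer bytes than requested -- it does not
# wait for the full count.
r, w = os.pipe()
os.write(w, b"hello")        # only 5 bytes in the pipe
chunk = os.read(r, 4096)     # ask for up to 4096 bytes
print(len(chunk))            # returns immediately with the 5 bytes
os.close(r)
os.close(w)
```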

This is the key to libc's performance in both line-by-line and bulk
transfers.

In fact, the Python file iterator is not faster than fgets() itself.
It's just faster than the Python-wrapped version of it, readline(),
which you said is just a thin layer above fgets(). So instead of
dismissing ol' good libc too quickly, maybe the investigation could
uncover a slight suboptimality in readline()...
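For what it's worth, such an investigation could start with a micro-benchmark along these lines (the file contents and line count below are arbitrary choices; note that in current CPython both paths go through the buffered io layer, so the gap may be small or absent on a given build):

```python
import os
import tempfile
import timeit

# Hypothetical micro-benchmark: count lines with the file iterator
# versus an explicit readline() loop over the same temporary file.
N_LINES = 100_000
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    for i in range(N_LINES):
        f.write(f"line {i}\n")

def iterate():
    # the buffered file iterator: `for line in f`
    with open(path) as f:
        return sum(1 for _ in f)

def read_lines():
    # one Python-level readline() call per line
    with open(path) as f:
        n = 0
        while True:
            line = f.readline()
            if not line:
                break
            n += 1
        return n

n_iter = iterate()
n_readline = read_lines()
assert n_iter == n_readline == N_LINES   # both see the same lines

t_iter = timeit.timeit(iterate, number=3)
t_readline = timeit.timeit(read_lines, number=3)
print(f"iterator: {t_iter:.3f}s  readline: {t_readline:.3f}s")
os.remove(path)
```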

> Anyone worried about
> speed won't use it anyway, so improving it on specific platforms wouldn't
> really help.

Oh really? grep, sed, awk, all these stdio-based tools, are pathetic
snails compared to the Python file iterator, of course. But then, why
didn't anyone feel the urge to use this very same trick in C and
provide an "fgets2()", working only on seekable devices, but so much
faster?
