Why is "for line in f" faster than readline()

Duncan Booth duncan.booth at invalid.invalid
Fri Jul 27 08:16:21 EDT 2007


Alexandre Ferrieux <alexandre.ferrieux at gmail.com> wrote:

> Now, *why* is such buffering gaining speed over stdio's fgets(), which
> already does input buffering (though in a more subtle way, which makes
> it still usable with pipes etc.) ?
> 

Because the C runtime library has different constraints than Python's file 
iterator.

In particular, stdio's fgets() must not read ahead (because the stream 
might not be seekable), so it is usually implemented as a series of calls 
that read one character at a time until a newline is found or the buffer 
is full. That inevitably has a lot more overhead than reading one 8k block 
and then splitting it up into lines.
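
To make the contrast concrete, here is a minimal sketch in Python (purely 
illustrative -- not the actual stdio or CPython code, and the function 
names are made up), assuming a file opened in binary mode:

def cautious_readline(f):
    # Read one byte at a time and never past the newline -- the way a
    # no-read-ahead fgets() has to behave when the stream might be a pipe.
    chunks = []
    while True:
        c = f.read(1)              # worst case: one low-level read per byte
        if not c:
            break
        chunks.append(c)
        if c == b"\n":
            break
    return b"".join(chunks)

def buffered_lines(f, bufsize=8192):
    # Pull in a large block, then split it into lines in memory -- roughly
    # the strategy Python's file iterator uses.
    leftover = b""
    while True:
        block = f.read(bufsize)
        if not block:
            if leftover:
                yield leftover
            return
        lines = (leftover + block).split(b"\n")
        leftover = lines.pop()     # possibly partial line at end of block
        for line in lines:
            yield line + b"\n"

On a large file the second version issues roughly one read per 8k of data 
instead of up to one per character, which is where the speed difference in 
the subject line comes from.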

It would probably be possible to write an implementation of fgets() that 
looked at the underlying stream and used buffering and seeking when the 
stream was seekable, falling back to the cautious approach otherwise, but 
that isn't what is usually done, and the incentive to do it isn't there: 
fgets() exists and works as advertised even if it isn't very efficient. 
Anyone worried about speed won't use it anyway, so improving it on 
specific platforms wouldn't really help.
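
For what it's worth, that hybrid might look something like this sketch 
(hypothetical, again written against a binary-mode Python file object 
rather than real libc internals):

def hybrid_readline(f, bufsize=8192):
    # Hypothetical: read ahead in big blocks when the stream is seekable,
    # then seek back so the file position is left just past the returned
    # line; otherwise fall back to the cautious byte-at-a-time loop.
    if f.seekable():
        start = f.tell()
        block = f.read(bufsize)
        if not block:
            return b""
        newline = block.find(b"\n")
        if newline >= 0:
            f.seek(start + newline + 1)   # hand back the bytes we over-read
            return block[:newline + 1]
        return block                      # no newline in the block (simplified)
    # Non-seekable stream (e.g. a pipe): never read past the newline.
    chunks = []
    while True:
        c = f.read(1)
        if not c:
            break
        chunks.append(c)
        if c == b"\n":
            break
    return b"".join(chunks)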

A lot of the C runtime is like that: it needs to be robust in a very 
general-purpose environment, but it doesn't need to be efficient. If you 
are worried about efficiency, you should look elsewhere.


