[Python-Dev] Rehabilitating fgets
Sat, 6 Jan 2001 17:14:44 -0500
> Unfortunately we can't use fgets(), even if it were faster than
> getline(), because it doesn't tell how many characters it read.
Let's think about that a little harder, because it appears to be our only
hope on Windows (the MS fgets isn't optimized like the Perl inner loop, but
it does lock/unlock the stream only at routine entry/exit, and uses a hidden
non-locking (== much faster) variant of getc in the guts -- we've seen that
the "locking" part of MS getc accounts for 17 of 30 seconds in my test).
> On files containing null bytes, readline() is supposed to treat
> these like any other character;
fgets does too (at least it does on Windows, and I believe that's std
behavior). The problem is that it also makes up a null byte on its own.
> If your input is "abc\0def\nxyz\n", the first readline() call
> should return "abc\0def\n".
> But with fgets(), you're left to look in the returned buffer for
> a null byte,
Also yes. But suppose I search "from the right", and ensure the buffer is
free of null bytes before the fgets. For your input file above, fgets
overwrites the initial 9 bytes of the buffer (assuming the buffer is at
least 9 bytes long ...) with

    abc\0def\n\0

and there's no problem if I search from the right.
> and there's no way (in general) to distinguish this result from
> an input file that only consisted of the three characters "abc".
As above, I'm not convinced of that. The input file "abc" would overwrite
the first four bytes of the buffer with

    abc\0

and leave the tail end alone (well, the MS fgets leaves the tail alone,
although I'm not sure ANSI C guarantees that).
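The search-from-the-right trick can be sketched in C (a hypothetical
fgets_with_len helper, not anything from the actual patch; as noted above,
it assumes fgets leaves the bytes past its terminator untouched, which the
MS fgets does but ANSI C doesn't promise):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: read one line with fgets, recovering the exact
   byte count even when the data contains embedded '\0' bytes.
   Returns the number of bytes read, or -1 on EOF/error. */
static long
fgets_with_len(char *buf, size_t bufsize, FILE *fp)
{
    size_t i;

    /* Pre-fill the buffer with a non-null sentinel, so that afterwards
       the rightmost '\0' in the buffer must be the terminator fgets
       wrote (assumes fgets doesn't scribble past its terminator). */
    memset(buf, '\xff', bufsize);
    if (fgets(buf, (int)bufsize, fp) == NULL)
        return -1;
    /* Search from the right: any '\0' bytes in the data itself lie to
       the left of the terminator, and the sentinel bytes to its right
       are non-null, so the first '\0' found is the terminator. */
    for (i = bufsize; i-- > 0; ) {
        if (buf[i] == '\0')
            return (long)i;
    }
    return -1;  /* can't happen: fgets always writes a terminator */
}
```

For the input "abc\0def\nxyz\n", the first call returns 8 with
"abc\0def\n" in the buffer, nulls and all.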
Of course I've *read* any number of Unix(tm) FAQs that also claim it's
impossible, but I never believed them either <wink>.
This extra buffer fiddling is surely an expense I don't want to pay, but the
timing evidence on Windows so far says that I can probably search and/or
copy the whole buffer 100 times and still be faster than enduring the
per-character locking in getc.
Am I missing something obvious?