[Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects fileobject.c,2.91,2.92
Guido van Rossum
Mon, 13 Nov 2000 17:52:46 -0500
> On Mon, Nov 13, 2000 at 05:36:06PM -0500, Guido van Rossum wrote:
> > After a bit of grepping, it seems that HAVE_LARGEFILE_SUPPORT reliably
> > means that the low-level system calls (lseek(), stat() etc.) support
> > large files, through an off_t type that is at least 8 bytes (assumed
> > to be equivalent with a long long in some places, given the use of
> > PyLong_FromLongLong() and PyLong_AsLongLong()).
> > But the problems occur in fileobject.c, where we're dealing with the
> > stdio library. Not all stdio libraries seem to support long files in
> > the same way, and they use a different typedef, fpos_t, which may be
> > larger or smaller in size than off_t.
> This isn't the problem. The problem is that we assume that because off_t is
> 8 bytes, we have HAVE_LARGEFILE_SUPPORT. This isn't true. On BSDI, off_t *is* 8
> bytes, but none of the available fseek/ftell variations take an off_t as
> argument ;P The TELL64() workaround works around that problem, but still
> doesn't enable large file support, because there isn't any such support in
> BSDI.
> (Trust me... we've had logfiles and IP-traffic databases truncated because
> of that... 2Gb is the limit, currently, on BSDI.)
Sure, but the #ifdef isn't really about how well the kernel supports
large files -- it is about what code you must use. There's one set of
places that uses off_t, and those are guided just fine by
HAVE_LARGEFILE_SUPPORT -- whether or not you can actually have files
larger than 2Gb!
But there's a *different* set of tests that must be used to determine
what to do for the stdio calls. Note that on all platforms so far
where TELL64 was needed (5 in total: MS_WIN64, NetBSD, OpenBSD, BSDI,
and Mac OSX), there were only compilation problems in fileobject.c,
and they went away by defining TELL64 as lseek((fd),0,SEEK_CUR).
What goes wrong if HAVE_LARGEFILE_SUPPORT is defined but the system
doesn't actually support files larger than 2Gb? I suppose you get
some sort of I/O error when you try to write such files. Since they
can't be created, you can't run into trouble when trying to read them.
Or am I still missing something?
--Guido van Rossum (home page: http://www.python.org/~guido/)