[ python-Bugs-1074011 ] write failure ignored in Py_Finalize()
SourceForge.net
noreply at sourceforge.net
Wed Jan 19 23:28:27 CET 2005
Bugs item #1074011, was opened at 2004-11-27 00:02
Message generated for change (Comment added) made by loewis
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1074011&group_id=5470
Category: Python Interpreter Core
Group: Python 2.3
Status: Open
Resolution: None
Priority: 6
Submitted By: Matthias Klose (doko)
Assigned to: Nobody/Anonymous (nobody)
Summary: write failure ignored in Py_Finalize()
Initial Comment:
[forwarded from http://bugs.debian.org/283108]
Write errors on stdout may be ignored, and hence may
result in loss of valuable user data.
Here's a quick demo:
$ ./close-bug
foo
$ ./close-bug > /dev/full && echo unreported write failure
unreported write failure
$ cat close-bug
#!/usr/bin/python
import sys
def main ():
try:
print 'foo'
sys.stdout.close ()
except IOError, e:
sys.stderr.write ('write failed: %s\n' % e)
sys.exit (1)
if __name__ == '__main__':
main ()
This particular failure comes from the following
unchecked fflush
of stdout in pythonrun.c:
static void
call_ll_exitfuncs(void)
{
    while (nexitfuncs > 0)
        (*exitfuncs[--nexitfuncs])();

    fflush(stdout);
    fflush(stderr);
}
When the stream is flushed manually, Python does raise an
exception.
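That distinction can be illustrated with a small sketch (Python 3 syntax for illustration; the stream class is hypothetical): a buffered write reports success, and only an explicit flush surfaces the error.

```python
import io

class FailingStream(io.TextIOBase):
    """Hypothetical stream whose flush always fails, simulating a full device."""
    def write(self, s):
        # Pretend the data was buffered successfully.
        return len(s)
    def flush(self):
        raise OSError(28, "No space left on device")

stream = FailingStream()
stream.write("foo\n")        # no error yet: the write only buffers
try:
    stream.flush()           # an explicit flush surfaces the error
    flush_raised = False
except OSError:
    flush_raised = True
```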
Please note that simply adding a test for fflush
failure is
not sufficient. If you change the above to do this:
if (fflush(stdout) != 0)
{
    ...handle error...
}
It will appear to solve the problem.
But here is a counterexample:
import sys

def main ():
    try:
        print "x" * 4095
        print
        sys.stdout.close ()
    except IOError, e:
        sys.stderr.write ('write failed: %s\n' % e)
        sys.exit (1)

if __name__ == '__main__':
    main ()
If you run the above with stdout redirected to /dev/full,
it will silently succeed (exit 0) in spite of a write
failure.
That's what happens on my debian unstable system.
Instead of just checking the fflush return value,
it should also check ferror:
if (fflush(stdout) != 0 || ferror(stdout))
{
    ...handle error...
}
----------------------------------------------------------------------
>Comment By: Martin v. Löwis (loewis)
Date: 2005-01-19 23:28
Message:
Logged In: YES
user_id=21627
I don't think the patch is right. If somebody explicitly
invokes sys.stdout.close(), this should have the same effect
as invoking fclose(stdout) in C.
It currently doesn't, but with meyering's patch from
2004-12-02 10:20, it still doesn't, so the patch is incorrect.
It might be better to explicitly invoke fclose() if the file
object has no associated f_close function.
----------------------------------------------------------------------
Comment By: Ben Hutchings (wom-work)
Date: 2004-12-20 00:38
Message:
Logged In: YES
user_id=203860
Tim, these bugs are quite difficult to trigger, but they can
hide any kind of file error and lose arbitrarily large
amounts of data.
Here, the following program will run indefinitely:
full = open('/dev/full', 'w')
while 1:
    print >>full, 'x' * 1023
    print >>full
It seems to be essential that both the character that fills
the file buffer (here it is 1024 bytes long) and the next
are generated implicitly by print - otherwise the write
error will be detected.
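The masking mechanism can be sketched without /dev/full (a deliberately simplified model, not CPython's actual buffering code): when the write that overflows the buffer never checks its return value, the failed flush loses data without recording any error.

```python
class LossyBuffer:
    """Hypothetical 1024-byte output buffer whose overflow flush fails
    silently, mimicking the unchecked implicit writes described above."""
    SIZE = 1024
    def __init__(self):
        self.buffered = 0
        self.lost = 0
    def write_unchecked(self, data):
        # Implicit writes (print's separators/newlines) ignore the result.
        self.buffered += len(data)
        if self.buffered >= self.SIZE:
            self.lost += self.buffered  # flush to a full device: data gone
            self.buffered = 0           # ...but no error is recorded

buf = LossyBuffer()
for _ in range(2):
    buf.write_unchecked("x" * 1023)
    buf.write_unchecked("\n")
# Data was lost on every buffer fill, yet no exception was ever raised.
```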
----------------------------------------------------------------------
Comment By: Tim Peters (tim_one)
Date: 2004-12-19 23:24
Message:
Logged In: YES
user_id=31435
Sorry, don't care enough to spend time on it (not a bug I've
had, not one I expect to have, don't care if it never
changes). Suggest not using /dev/full as an output device
<wink>.
----------------------------------------------------------------------
Comment By: Raymond Hettinger (rhettinger)
Date: 2004-12-19 22:47
Message:
Logged In: YES
user_id=80475
Tim, what do you think?
----------------------------------------------------------------------
Comment By: Ben Hutchings (wom-work)
Date: 2004-12-07 01:33
Message:
Logged In: YES
user_id=203860
OK, I can reproduce the remaining problem if I substitute
1023 for 4095. The culprit seems to be the unchecked fputs()
in PyFile_WriteString, which is used for the spaces and
newlines generated by the print statement but not for the
objects. I think that's a separate bug.
----------------------------------------------------------------------
Comment By: Jim Meyering (meyering)
Date: 2004-12-07 00:27
Message:
Logged In: YES
user_id=41497
Even with python-2.4 (built fresh from CVS this morning),
I can still reproduce the problem on a Linux-2.6.9/ext3 system:
/p/p/python-2.4/bin/python write-4096 > /dev/full && echo fail
fail
The size that provokes the failure depends on the I/O block size
of your system, so you might need something as big as 131072
on some other type of system.
----------------------------------------------------------------------
Comment By: Ben Hutchings (wom-work)
Date: 2004-12-07 00:11
Message:
Logged In: YES
user_id=203860
The second sample script works for me (in that it raises the
expected exception) with or without my patch. If the error
status of the file has been set by some other operation
without Python noticing it then surely that's a bug in the
code that calls that other operation.
----------------------------------------------------------------------
Comment By: Jim Meyering (meyering)
Date: 2004-12-02 10:20
Message:
Logged In: YES
user_id=41497
Hi,
I submitted the original report (and text above).
The patch that just calls fflush is not enough, as the second sample script illustrates.
Here's a patch that does a little better
--- Python/sysmodule.c.~2.126.~	2004-08-12 20:19:17.000000000 +0200
+++ Python/sysmodule.c	2004-12-02 09:59:09.058953816 +0100
@@ -927,6 +927,13 @@ settrace() -- set the global debug traci
 )
 /* end of sys_doc */ ;
 
+static int
+_check_and_flush (FILE *stream)
+{
+	int prev_fail = ferror (stream);
+	return fflush (stream) || prev_fail ? EOF : 0;
+}
+
 PyObject *
 _PySys_Init(void)
 {
@@ -941,8 +948,8 @@ _PySys_Init(void)
 	sysdict = PyModule_GetDict(m);
 
 	sysin = PyFile_FromFile(stdin, "<stdin>", "r", NULL);
-	sysout = PyFile_FromFile(stdout, "<stdout>", "w", NULL);
-	syserr = PyFile_FromFile(stderr, "<stderr>", "w", NULL);
+	sysout = PyFile_FromFile(stdout, "<stdout>", "w", _check_and_flush);
+	syserr = PyFile_FromFile(stderr, "<stderr>", "w", _check_and_flush);
 	if (PyErr_Occurred())
 		return NULL;
 
 #ifdef MS_WINDOWS
With that patch, the failing script now evokes a diagnostic
and nonzero exit.
$ ./python write-4096 > /dev/full
write failed: (0, 'Error')
But, as you can see, the diagnostic leaves a lot to be desired.
It should say ``write failed: [Errno 28] No space left on device''.
It'll take a more significant change to propagate errno from the
failing fputs/fwrite/etc. to where it can be used.
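The desired diagnostic corresponds to an exception carrying errno 28 (ENOSPC); a short sketch of what the propagated error should look like (illustrative only, using modern Python syntax):

```python
import errno
import os

# The kind of exception that should be propagated from the failing
# fflush/fwrite: the errno plus its standard message.
e = OSError(errno.ENOSPC, os.strerror(errno.ENOSPC))
message = "write failed: %s" % e
# On Linux this renders as:
#   write failed: [Errno 28] No space left on device
```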
Jim at meyering.net
----------------------------------------------------------------------
Comment By: Ben Hutchings (wom-work)
Date: 2004-11-29 11:51
Message:
Logged In: YES
user_id=203860
I can't see how to upload an attachment to this bug, so see
patch 1075147.
----------------------------------------------------------------------
Comment By: Ben Hutchings (wom-work)
Date: 2004-11-29 11:43
Message:
Logged In: YES
user_id=203860
Each Python file object has a pointer to the function to be
called on the C stream when close is called on the file
object. Normally this is fclose, but Python doesn't want
the standard streams closed so their file objects are
created with the function pointer set to NULL, making close
a no-op on the underlying files. I'm going to attach a
patch that changes the function for stdout and stderr to be
fflush, so that the streams stay open but write errors are
detected at the time of an explicit close. I don't see the
relevance of the exitfuncs.
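The proposed semantics can be modelled in Python (a hypothetical wrapper class, not the actual patch): close() flushes the underlying stream, surfacing any write error, but leaves the stream itself open for further use.

```python
import io

class StdStreamWrapper:
    """Hypothetical model of the patch's behaviour for stdout/stderr:
    close() flushes (so write errors are detected) but does not close
    the underlying stream."""
    def __init__(self, raw):
        self.raw = raw
    def write(self, data):
        return self.raw.write(data)
    def close(self):
        # f_close is fflush rather than fclose: errors surface here...
        self.raw.flush()
        # ...but self.raw.close() is never called: stdout stays usable.

raw = io.StringIO()
out = StdStreamWrapper(raw)
out.write("hello")
out.close()
# The underlying stream remains open and its contents intact.
```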
----------------------------------------------------------------------