[issue8863] Display Python backtrace on SIGSEGV, SIGFPE and fatal error
report at bugs.python.org
Mon Dec 20 14:30:57 CET 2010
STINNER Victor <victor.stinner at haypocalc.com> added the comment:
On Monday 20 December 2010 at 07:55:08, you wrote:
> +#define NFAULT_SIGNALS (sizeof(fault_signals) / sizeof(fault_signals))
> +static fault_handler_t fault_handlers;
> ... should use "NFAULT_SIGNALS" instead of "4".
Ah yes, NFAULT_SIGNALS is a better choice than hard-coding the maximum number
of possible signals.
> However, this bit of code bothers me a lot:
> + const int fd = 2; /* should be fileno(stderr) */
> To assume that fd=2 is the right place to be writing bytes is assuming
> quite a bit about the state of the application. It is not unusual at all
> to close 0,1,2 when writing daemons, which frees them up to be assigned to
Writing to a closed file descriptor just does nothing: the call fails with
EBADF and has no other effect. Closed file descriptors are not a problem.
> For all you know, fd=2 currently is a network socket that you
> will be throwing gibberish at, or worse it could be a block device that
> you are writing gibberish on.
The GNU libc also has a fault handler (source code: debug/segfault.c). It uses
file descriptor 2, unless the SEGFAULT_OUTPUT_NAME environment variable is set
(its value is stored in the "fname" variable), in which case it opens the
specified file:
/* This is the name of the file we are writing to. If none is given
or we cannot write to this file write to stderr. */
fd = 2;
if (fname != NULL)
fd = open (fname, O_TRUNC | O_WRONLY | O_CREAT, 0666);
if (fd == -1)
fd = 2;
The GNU libc installs handlers for the SIGSEGV, SIGILL, SIGBUS, SIGSTKFLT,
SIGABRT and SIGFPE signals. The handler restores the default handler and
re-raises the signal:
/* Pass on the signal (so that a core file is produced). */
sa.sa_handler = SIG_DFL;
sa.sa_flags = 0;
sigaction (signal, &sa, NULL);
raise (signal);
Where "raise(signal);" is something like kill(getpid(), signal).
It only uses an alternate stack if the SEGFAULT_USE_ALTSTACK environment
variable is set.
The handler can display registers, the backtrace and the memory mappings,
depending on the compilation options and the operating system.
> The closest discussion I could find on this subject was on the libstdc++
> mailing-list with regard to their verbose termination code:
> AFAICT, their conclusion was that the only reasonable solution was to write
> to the stderr FILE since it was the only thing that is guaranteed to make
> sense always (even though it may fail). Their situation is different in
> that they are handling a C++ exception, so they don't have to stick to
> async-safe functions, but absent that extra difficulty, I believe the
> reasoning is the same.
The FILE* type cannot be used because fprintf(), fputs(), ... are not
async-signal-safe.
> I'm not sure there is a safe way to know what the fileno for "sys.stderr"
> is because it can be anything, including an object whose fileno changes
> over time
The signal handler cannot access Python objects. E.g. if sys.stderr is a
StringIO() (which has no file descriptor), it cannot be used.
> However, I think it would be fair to support only built-in io
> types that are obviously safe, since you could cache the fileno() value at
> assignment to use in your fault handler.
The problem is detecting changes to the stderr file descriptor (e.g. closed,
duplicated, reopened, etc.). I don't think it's possible to detect such
changes with a portable function.
Python tracker <report at bugs.python.org>