[Python-ideas] Responsive signal handling

Cameron Simpson cs at zip.com.au
Mon Jun 22 09:52:07 CEST 2015

On 21Jun2015 19:16, Devin Jeanpierre <jeanpierreda at gmail.com> wrote:
>On the topic of obscure concurrency trivia, signal handling in Python
>is not very friendly, and I'm wondering if anyone is interested in
>reviewing changes to fix it.
>The world today: handlers only run in the main thread, but if the main
>thread is running a bytecode (e.g. a call to a C function), it will
>wait for that first. For example, signal handlers don't get run if you
>are in the middle of a lock acquisition, thread join, or (sometimes) a
>select call, until after the call returns (which may take a very long
>time).
>This makes it difficult to be responsive to signals without
>acrobatics, or without using a library that does those acrobatics for
>you (such as Twisted.) Being responsive to SIGTERM and SIGINT is, IMO,
>important for running programs in the cloud, since otherwise they may
>be forcefully killed by the job manager, causing user-facing errors.
>(It's also annoying as a command line user when you can't kill a
>process with anything less than SIGKILL.)

I agree with all of this, but I do think that handling signals in the main 
thread is a sensible default: it gives very predictable behaviour.
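For what it's worth, that predictable default can be sketched in a few lines 
(a minimal illustration, assuming a POSIX platform where SIGUSR1 is available):

```python
import os
import signal
import threading

events = []

def handler(signum, frame):
    # CPython invokes Python-level signal handlers in the main thread,
    # between bytecode instructions, no matter which thread the OS
    # delivered the underlying C-level signal to.
    events.append((signum, threading.current_thread() is threading.main_thread()))

signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)  # send ourselves a signal
# Within a few bytecodes the handler has run, in the main thread.
```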

>- Keep only running signal handlers in the main thread, but allow them
>to run even in the middle of a call to a C function for as many C
>functions as we can.

This feels fragile: formerly, one could expect C calls to be "atomic" from the 
main thread's point of view, and conversely a C function could expect that the 
main thread (or whatever thread called it) stays paused for the duration of the 
call. As soon as the calling thread can reactivate mid-call, those guarantees 
are broken. Suppose, for just one scenario, that the C call is manipulating 
thread-local Python state.

So I'm -1 on this on the face of it.

>This is not possible in general, but it can be made to work for all
>blocking operations in the stdlib.

Hmm. I'm not sure that you will find this universally so. No, I have no 
examples proving my intuition here.

>Operations that run in C but just
>take a long time, or that are part of third-party code, will continue
>to inhibit responsiveness.
>- Run signal handlers in a dedicated separate thread.
>IMO this is generally better than running signal handlers in the main
>thread, because it eliminates the separate concept of "async-safe" and
>just requires "thread-safe". So you can use regular threading
>synchronization primitives for safety, instead of relying on luck /
>memorized lists of atomic/re-entrant operations.

Yes, I am in favour of this or something like it. Personally I would go for 
either or both of:

  - a stdlib function to specify the thread to handle signals instead of main

  - a stdlib function to declare that signals should immediately place a nice descriptive "signal" object on a Queue, leaving it to the user to handle the queue (for example, by spawning a thread to consume it)

>Something still needs to run in the main thread though, for e.g.
>KeyboardInterrupt, so this is not super straightforward.

Is this necessarily true?
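For instance, a dedicated signal thread could still deliver KeyboardInterrupt 
to the main thread via _thread.interrupt_main() (a rough sketch; the 0.1s 
delay is only there to avoid racing the main thread into its try block):

```python
import _thread
import threading
import time

interrupted = False

def poke_main():
    time.sleep(0.1)  # give the main thread time to enter its loop
    _thread.interrupt_main()  # schedule KeyboardInterrupt in the main thread

t = threading.Thread(target=poke_main)
t.start()
try:
    while True:
        time.sleep(0.05)  # the pending interrupt is raised between checks
except KeyboardInterrupt:
    interrupted = True
t.join()
```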

>Also, it
>could break any code that really relies on signal handlers running in
>the main thread.

Which is why it should never be the default; I am firmly of the opinion that 
such changed handling should be explicitly requested by the program.

Cameron Simpson <cs at zip.com.au>

Facts do not discourage the conspiracy-minded.
        - Robert Crawford <rawford at iac.net>
