[Python-3000] python-safethread project status

Adam Olsen rhamph at gmail.com
Tue Mar 18 21:32:55 CET 2008


On Tue, Mar 18, 2008 at 2:04 PM, Marcin 'Qrczak' Kowalczyk
<qrczak at knm.org.pl> wrote:
> On Tuesday, 18-03-2008, at 13:37 -0600, Adam Olsen wrote:
>
>
>  > What sort of blocking wait do you use?  Or did you mean you don't have one?
>
>  I meant that I don't have one, I only have iteration over the whole
>  queue, which is equivalent to having trypop. Which prompts the question:
>  what are the use cases of pop & wait, i.e. of blocking until the queue
>  is not empty?

The finalizer thread blocks until something's been deleted.  I may
need a better API if I (eventually) switch to a threadpool for
finalizers though.
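
Roughly, the shape of it looks like the sketch below (plain Python;
death_queue, watch and finalizer_thread are names I've made up for
illustration, the actual safethread API differs):

import queue
import threading
import weakref

death_queue = queue.Queue()   # hypothetical death queue
_live_refs = set()            # keep weakrefs alive so their callbacks fire

def watch(obj, cleanup):
    # When obj is collected, push its cleanup callable onto the queue.
    def on_death(ref, cleanup=cleanup):
        _live_refs.discard(ref)
        death_queue.put(cleanup)
    _live_refs.add(weakref.ref(obj, on_death))

def finalizer_thread():
    while True:
        cleanup = death_queue.get()   # blocks until something's been deleted
        cleanup()

t = threading.Thread(target=finalizer_thread)
t.daemon = True
t.start()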


>  I used to have weakref callbacks. They must be synchronized with the
>  rest of the program (unless their operations on shared data are already
>  atomic). Then I realized that a design with deathqueues suffices for
>  cases like WeakKeyDictionary, and it is no less convenient because
>  the queue can be checked in exactly those places where the design with
>  callbacks would lock the shared data.
>
>  Regarding interrupts, I have a more involved scheme with threads
>  blocking and unblocking interrupts, with a counter of the number of
>  blocks, with a special synchronous mode for being interruptible at
>  blocking operations only (similar to PTHREAD_CANCEL_DEFERRED, but the
>  default is PTHREAD_CANCEL_ASYNCHRONOUS), and with signals which don't
>  necessarily cause an exception but instead execute a signal handler,
>  which often throws an exception.
>
>  I've seen a simpler interrupt blocking scheme being proposed for Python,
>  based on the design in Haskell:
>    http://www.cs.williams.edu/~freund/papers/python.pdf
>  but it seems to have been abandoned (the paper is from 2002 and I
>  haven't heard of anything newer).

I think deferred is a vastly better default.  To translate for
everybody else, deferred in this context means "set a flag that I/O
functions can check".  It means you'll only get a Cancelled exception
at documented points.
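
In code terms, something like this rough sketch (Cancelled,
cancel_flag and checkpoint are invented names here, not the actual
API):

import threading

class Cancelled(Exception):
    pass                          # stand-in for the real cancellation exception

cancel_flag = threading.Event()   # "the flag that I/O functions can check"

def checkpoint():
    # A documented cancellation point - the only place Cancelled is raised.
    if cancel_flag.is_set():
        raise Cancelled()

def copy_stream(src, dst, bufsize=8192):
    while True:
        checkpoint()               # cancellation may fire here...
        data = src.read(bufsize)   # ...but never in the middle of this call
        if not data:
            break
        dst.write(data)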

In contrast, asynchronous cancellation may raise an exception at
totally arbitrary points: maybe in the finally portion of a
try/finally block, maybe just as you were about to call os.close().
It's not something you can rely on.
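
For example, the comments below mark the windows where an
asynchronously delivered exception would break an otherwise correct
function (hypothetical code, purely to show the failure points):

import os

def write_file(path, data):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    # An async exception delivered right here would leak fd entirely.
    try:
        os.write(fd, data)
    finally:
        # Delivered just before os.close() runs, the descriptor leaks;
        # delivered inside the close itself, nothing further can clean up.
        os.close(fd)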

Note, though, that my cancellation is *not* a signal handler mechanism!
There's a dedicated signal handler thread in which to process signals.
Cancellation is only the last-ditch, "abandon all hope - start tearing
down the program (or at least part of it)" option.

Incidentally, although I currently only support "deferred"
cancellation, I'd eventually like to add some sort of option for
"asynchronous" (what I tend to call "forced") cancellation.  It could
be used in what you know is a simple CPU-bound math loop
(10**10**10?), or as a backup in the interactive interpreter.  The
interactive interpreter needs to be modified to use all of this
properly anyway, so I'm in no hurry.
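
(For the curious: stock CPython already exposes a crude forced
mechanism through its C API.  The sketch below is only an
illustration of that, not the safethread design, and since the
exception is delivered between bytecodes it still can't break into a
single giant operation like 10**10**10.)

import ctypes
import threading
import time

class Cancelled(Exception):
    pass

def force_cancel(thread):
    # Asynchronously raise Cancelled in another thread.  Delivery only
    # happens between bytecode instructions, so it can interrupt
    # pure-Python loops but not one long-running C-level operation.
    n = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_ulong(thread.ident), ctypes.py_object(Cancelled))
    if n != 1:
        raise RuntimeError("failed to deliver exception")

def busy():
    try:
        while True:        # CPU-bound pure-Python loop
            pass
    except Cancelled:
        print("forcibly cancelled")

t = threading.Thread(target=busy)
t.start()
time.sleep(0.1)
force_cancel(t)
t.join()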

-- 
Adam Olsen, aka Rhamphoryncus

