[Python-3000] python-safethread project status

Adam Olsen rhamph at gmail.com
Tue Mar 18 23:22:36 CET 2008


On Tue, Mar 18, 2008 at 4:04 PM, Marcin 'Qrczak' Kowalczyk
<qrczak at knm.org.pl> wrote:
> Dnia 18-03-2008, Wt o godzinie 14:32 -0600, Adam Olsen pisze:
>
>  > The finalizer thread blocks until something's been deleted.
>
>  Ok. If this is the only use case, I feel quite safe not having this,
>  because my finalizers are implemented independently (and a finalizer
>  thread is created on demand, which is cheap there).

Essentially, your "spawn a thread to handle this task" function serves
the same purpose.  As I said, I'd likely need to redesign if I had a
thread pool.  The details here aren't too important.
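
For concreteness, the kind of on-demand finalizer thread being described
might look roughly like the sketch below.  The names _pending and
finalizer_loop are mine, not from either implementation; the point is just
that the thread blocks until something's been deleted.

    import queue
    import threading

    _pending = queue.Queue()   # finalizable objects are enqueued here on deletion

    def finalizer_loop():
        # Blocks until something's been deleted, then runs its finalizer.
        while True:
            obj, finalize = _pending.get()    # blocks while the queue is empty
            try:
                finalize(obj)
            except Exception:
                pass   # a real implementation would log this

    # Created on demand, the first time a finalizable object is registered.
    _finalizer_thread = threading.Thread(target=finalizer_loop)
    _finalizer_thread.daemon = True
    _finalizer_thread.start()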


>  > I think deferred is a vastly better default.  To translate for
>  > everybody else, deferred in this context means "set a flag that I/O
>  > functions can check".  It means you'll only get a Cancelled exception
>  > at documented points.
>
>  Well, if ^C uses the same mechanism, which is the case in my language,
>  then this means that ^C can only interrupt I/O, or synchronization with
>  other threads, or sleep, but not a long pure computation.
>
>  The choice of the default blocking state depends on the programming
>  style. The more shared data is mutated, the more tempting it is to
>  default to deferred interrupts. The more functional the style, the safer
>  asynchronous interrupts are. Also, defaulting to asynchronous interrupts
>  requires various language functions and constructs to block interrupts
>  implicitly, so code does not have to deal with interrupts all the time.
>  I agree that defaulting to asynchronous interrupts is risky, and might
>  be too demanding of library authors.

I'd tend to assume only *purely* functional languages should have
asynchronous interrupts.  Any imperative language with them is
suspect.

Search for info on Java's deprecated Thread.stop() if you're not
already familiar with the problems it has.
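
To spell the deferred approach out in code: it reduces to a flag that gets
checked, and an exception that's raised, only at documented points.
(Cancelled, cancel_flag and checkpoint below are illustrative names, not
the actual python-safethread API.)

    import threading

    class Cancelled(Exception):
        """Raised only at documented cancellation points."""

    cancel_flag = threading.Event()

    def checkpoint():
        # I/O functions (and other documented blocking points) call this;
        # pure computation never sees the exception unless it checks itself.
        if cancel_flag.is_set():
            raise Cancelled()

    def read_some(sock, n):
        checkpoint()       # documented cancellation point before blocking I/O
        data = sock.recv(n)
        checkpoint()       # and after
        return data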


>  In reality, whether a particular interrupt is safe to process could
>  depend on the kind of interrupt (e.g. even if a locally handled
>  interrupt should be blocked to avoid corrupting a data structure,
>  an interrupt which exits the whole program would be safe, since the data
>  structure is dead then anyway). I have not considered such a design in a
>  programming language. It looks too complex for its benefits.

In my case interrupt/cancellation raises an exception.  Doing so
asynchronously could have any manner of bad effects, effectively
eliminating the ability to shut down gracefully.  If I wanted that I'd
just kill it from C (_exit(), abort(), SIGKILL, etc.)
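
Plain CPython already shows the contrast: a raised exception unwinds through
finally blocks and context managers, while the C-level escape hatches skip
all of that.  Rough sketch:

    import os

    def graceful():
        try:
            raise KeyboardInterrupt    # an exception unwinds normally
        finally:
            print("cleanup runs")      # finally blocks still execute

    def hard_kill():
        try:
            os._exit(1)                # exits immediately: no finally, no atexit
        finally:
            print("never printed")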


>  > In contrast, asynchronous may raise an exception at totally arbitrary
>  > points.  Maybe in the finally portion of a try/finally block,
>
>  No, because my language blocks interrupts automatically there :-)
>
>  They are also blocked e.g. when a mutex is locked, or when a module is
>  being imported.

You mean the *entire* time a mutex is held?  That wouldn't work for
monitors, as they expect to hold their lock for extended periods of
time.

In fact, there are some implicit MonitorSpaces whose locks are very
likely to be held for the entire run of the program.
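
For anyone not following the monitor point, in Python terms it's roughly the
pattern below, where the lock is held for the whole operation.  (Monitor here
is a stand-in sketch, not the real MonitorSpace class.)

    import threading

    class Monitor:
        # All access to the shared state goes through one lock, monitor-style.
        def __init__(self):
            self._lock = threading.Lock()
            self._state = {}

        def run(self, work):
            # The lock is held for the whole (possibly long-running) operation.
            # If interrupts were blocked whenever any mutex is held, code
            # inside work() could never be cancelled.
            with self._lock:
                return work(self._state)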

-- 
Adam Olsen, aka Rhamphoryncus

