[Python-ideas] Protecting finally clauses of interruptions

Yury Selivanov yselivanov.ml at gmail.com
Tue Apr 3 16:09:07 CEST 2012


On 2012-04-03, at 3:16 AM, Paul Colomiets wrote:
> 1. For yield-based coroutines you must inspect the stack
> anyway; since the interpreter doesn't have a stack, you
> build it yourself (although I don't know how `yield from`
> changes that)
> 
> 2. For greenlet based coroutines it is unclear what
> the stack is. For example:
> 
> def f1():
>    try:
>        pass
>    finally:
>        g1.switch()
> 
> def f2():
>    sleep(1.0)
> 
> g1 = greenlet(f1)
> g2 = greenlet(f2)
> g1.switch()
> 
> Is it safe to interrupt g2 while it's in `sleep`? (If you wonder
> how I fix this problem with the f_in_finally stack, it's easy: I
> usually switch to a coroutine from a trampoline, so that is the
> boundary of the stack which should be checked for f_in_finally.)

Wait.  So you're tracing the whole coroutine execution stack to
check whether the current coroutine was called in the finally block
of some other coroutine?  For handling timeouts I don't think that
is necessary (maybe there are other use cases?).

In the example below you actually have to interrupt g2:

def g1():
   try:
     ...
   finally:
     g2().with_timeout(0.1)

def g2():
   sleep(2)

You shouldn't guarantee that the *whole* chain of functions/
coroutines/etc. will be safe in their finally statements; you just
need to protect the top coroutines in the timeouts queue.

Hence, in the above example, if you run g1() with a timeout, the
trampoline should ensure that g1() is not interrupted while it is
in its finally block.  But it can interrupt g2() in any context,
at any point of its execution.  And if g2() gets interrupted,
g1()'s finally statement will be broken, yes.  But it is the
developer's responsibility to ensure that the code in 'finally'
handles exceptions within it correctly.
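
To make that concrete for yield-based coroutines, a minimal sketch
(Timeout, in_finally and run_with_timeout are made-up names here, and
f_in_finally is the frame flag from your proposal, which doesn't
exist in CPython today):

import time

class Timeout(Exception):
    pass

def in_finally(frame):
    # Placeholder: with the proposed frame flag this would just be
    # ``frame.f_in_finally``.  No such attribute exists today, so the
    # getattr() default makes the check a no-op.
    return getattr(frame, 'f_in_finally', False)

def run_with_timeout(coro, timeout):
    # Drive a generator-based coroutine; deliver Timeout only when the
    # *outermost* frame is not in a finally block.  Nested calls made
    # from that finally block (like g2() above) get no protection.
    deadline = time.time() + timeout
    while True:
        try:
            coro.send(None)            # resume until the next yield
        except StopIteration:
            return                     # the coroutine finished normally
        if time.time() > deadline and not in_finally(coro.gi_frame):
            try:
                coro.throw(Timeout)    # interrupt the top coroutine only
            except (Timeout, StopIteration):
                return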

That's just my approach to handling timeouts; I'm not advocating
it as the only right one.

Are there any other use cases where you have to inspect the
execution stack?  Because if there are none, an 'interrupt()'
method is sufficient and implementable, as both generators and
greenlets are well aware of the code frames they are holding.
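
For illustration, a rough sketch of what such an interrupt() could
look like (Interrupted and interrupt are made-up names; gi_frame,
gr_frame and throw() are the existing generator and greenlet APIs):

class Interrupted(Exception):
    pass

def interrupt(task):
    # No caller-stack tracing needed: both kinds of coroutine expose
    # their suspended frame and accept a thrown exception.
    if hasattr(task, 'gi_frame'):
        # Generator-based coroutine: its suspended frame is
        # task.gi_frame; throw() resumes it with the exception set.
        try:
            task.throw(Interrupted)
        except (Interrupted, StopIteration):
            pass                   # it didn't swallow the interruption
    else:
        # Greenlet: its topmost frame is task.gr_frame; throw()
        # switches to it and raises the exception there (if unhandled,
        # it propagates to the greenlet's parent).
        task.throw(Interrupted)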

> 3. For threads it was discussed several times and rejected.
> This proposal may make thread interruptions slightly safer,
> but I'm not sure it's enough to convince people.

That's why I'm advocating for a PEP.  Thread interruption isn't
a safe feature in the .NET CLR either.  You may break things with
it there too.  And it doesn't protect the 'finally' statements of
the whole chain of functions calling each other; it just protects
the top frame.  The 'abort' and 'interrupt' methods aren't
advertised for use in .NET; use them at your own risk.

So I don't think that we can, or should, ensure 100% safety when
interrupting a thread.  And that's why I think it is worth
proposing a mechanism that will work for many concurrency
primitives.

> So I still propose adding a frame flag, which doesn't break
> anything and lets us experiment with interruptions
> without putting some experimental code into the core.


There are pros and cons to your solution.

Pros
----

- can be used right away in coroutine libraries.

- somewhat simple and small CPython patch.

Cons
----

- you have to work with frames almost throughout the entire
execution of the program.  In PyPy, working with frames will simply
disable the JIT, and I'm not sure how frame access performs in
Jython and IronPython.  (See the sketch after this list.)

- no mechanism for interrupting a running thread.  In almost any
coroutine library you will have a thread pool, and sometimes you
need a way to interrupt workers.  So it's not enough even for
coroutines.
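
To make the first point concrete, the check implied by the frame
flag would look roughly like this (f_in_finally is the proposed,
currently non-existent attribute, and stack_in_finally is a made-up
name):

def stack_in_finally(frame):
    # Walk the frame chain looking for the proposed f_in_finally flag
    # (it doesn't exist today, hence the getattr() default).  Having
    # to touch frame objects on every check like this is exactly what
    # defeats the PyPy JIT.
    while frame is not None:
        if getattr(frame, 'f_in_finally', False):
            return True
        frame = frame.f_back
    return False

# A scheduler would call something like
# stack_in_finally(coro.gi_frame) or stack_in_finally(sys._getframe())
# before delivering a timeout exception.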

-
Yury


