[Python-ideas] Protecting finally clauses of interruptions
paul at colomiets.name
Wed Apr 4 10:04:04 CEST 2012
On Wed, Apr 4, 2012 at 4:23 AM, Yury Selivanov <yselivanov.ml at gmail.com> wrote:
> On 2012-04-03, at 3:22 PM, Paul Colomiets wrote:
>> (Although, I don't know how `yield from` changes working with
>> yield-based coroutines; maybe its behavior is quite different.)
>> For greenlets the situation is a bit different, as Python knows the
>> stack there, but you still need to traverse it (or, as Andrew
>> mentioned, you can just propagate a flag).
> Why traverse? Why propagate? As I explained in my previous posts
> here, you need to protect only the top-of-stack coroutines in the
> timeout or trampoline execution queues. You should illustrate
> your logic with a clearer example - say, three or four coroutines
> that call each other + a glimpse of how your trampoline works.
> But I'm not sure that is really necessary.
Here is the previous example in more detail (although still simplified):

    def add_money(user_id, money):
        yield redis_lock(user_id)
        try:
            yield redis_incr('user:'+user_id+':money', money)
        finally:
            yield redis_unlock(user_id)

    # this one is crucial to show the point of discussion;
    # other functions are similar:
    def redis_unlock(lock):
        yield redis_socket.wait_write()  # yields back when socket is
                                         # ready for writing
        cmd = ('DEL user:'+lock+'\n').encode('ascii')
        redis_socket.write(cmd)  # should be a loop here, actually
        result = redis_socket.read(1024)  # here a loop too
        assert result == 'OK\n'
When the trampoline gets a coroutine from a `next()` or `send()` call,
it puts it on top of the stack and doesn't dispatch the original one
until the topmost one has exited.
The point is that if a timeout arrives inside the `redis_unlock`
function, we must wait until the `finally` clause of `add_money` has
finished.
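To make the dispatch order concrete, here is a minimal sketch of such a stack-based trampoline (the `trampoline` function and the toy coroutines are my own illustration, not the framework discussed in this thread; `yield from` in Python 3.3 later does this natively):

```python
def trampoline(root):
    # Minimal stack-based trampoline: when a coroutine yields another
    # coroutine (a generator), it is pushed on top of the stack, and
    # the parent is not dispatched again until the child has exited.
    stack = [root]
    value = None
    while stack:
        top = stack[-1]
        try:
            yielded = top.send(value) if value is not None else next(top)
        except StopIteration:
            stack.pop()        # topmost coroutine exited: resume parent
            value = None
            continue
        if hasattr(yielded, 'send'):
            stack.append(yielded)   # sub-coroutine: suspend the parent
            value = None
        else:
            value = yielded         # plain value: echo it back

def child():
    got = yield 'ping'     # yields to the trampoline, gets 'ping' back
    assert got == 'ping'

def parent(log):
    yield child()          # parent is parked until child() exits
    log.append('parent resumed')

log = []
trampoline(parent(log))
```

A timeout mechanism that simply throws into `stack[-1]` is exactly what can land in the middle of a `finally` clause; hence this thread.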
>> The whole intention of using a coroutine library is not to
>> have a thread pool. Could you describe your use case
>> in more detail?
> Well, our company has been using coroutines for about 2.5 years
> now (the framework is not yet open-sourced). And in our practice
> a threadpool is really handy, as it allows you to:
> - use non-asynchronous libraries, which you don't want to
> monkeypatch with greensockets (or are even unable to monkeypatch)
And we rewrite them in Python. That seems to be more useful.
> - wrap some functions that are usually very fast, but once in
> a while may take some time. And sometimes you don't want to
> offload them to a separate process
> - and yes, do DNS lookups if you don't have a compiled cpython
> extension that wraps c-ares or something alike.
Maybe we should propose an asynchronous DNS library for Python?
We have the same problem, although we do not resolve hosts at
runtime (only at startup), so a synchronous one is good enough
for our purposes.
> Please let's avoid shifting further discussion to proving or
> disproving the necessity of threadpools.
> They are being actively used and there is a demand for
> (more or less) graceful threads interruption or abortion.
Given these use cases, what stops you from making interruption explicit?
> Please write a PEP and we'll continue discussion from that
> point. Hopefully, it will get more attention than this thread.
I don't see the point in writing a PEP until I have an idea of
what the PEP should propose. If you have one, you can write it.
Again, you want to implement thread interruption, and that's not
my point; there is another thread for that.
On Wed, Apr 4, 2012 at 3:03 AM, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> I don't think a frame flag on its own is quite enough.
> You don't just want to prevent interruptions while in
> a finally block, you want to defer them until the finally
> counter gets back to zero. Making the interrupter sleep
> and try again in that situation is rather ugly.
> So perhaps there could also be a callback that gets
> invoked when the counter goes down to zero.
Do you mean putting a callback in the frame, which gets
executed at the next bytecode just like a signal handler,
except that it waits until the finally clause has executed?
It would work, except it may have a slight performance
impact on each bytecode. But I'm not sure if it will be noticeable.
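For what it's worth, Greg's counter-plus-callback scheme can be sketched in pure Python (the `FinallyGuard` class and all of its names are my invention for illustration; the real counter would live in the interpreter's frame object and be maintained by the bytecode for finally blocks):

```python
class FinallyGuard:
    # Simulation of a per-frame "finally depth" counter: interruptions
    # requested while the counter is nonzero are deferred, and the
    # pending callback fires when the counter drops back to zero.
    def __init__(self):
        self.depth = 0
        self.pending = None   # callback deferred during a finally block

    def interrupt(self, callback):
        if self.depth == 0:
            callback()                 # safe: not inside any finally
        else:
            self.pending = callback    # defer until depth reaches zero

    def __enter__(self):               # entering a finally clause
        self.depth += 1
        return self

    def __exit__(self, *exc):          # leaving a finally clause
        self.depth -= 1
        if self.depth == 0 and self.pending is not None:
            cb, self.pending = self.pending, None
            cb()                       # fire the deferred interruption

events = []
guard = FinallyGuard()
with guard:                            # pretend: inside a finally block
    guard.interrupt(lambda: events.append('interrupted'))
    events.append('cleanup')           # cleanup runs to completion first
# the deferred callback fired when the counter returned to zero
```

The interrupter never has to sleep and retry: it just registers the callback and the frame delivers it at the first safe point.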