[Python-ideas] Async API

Guido van Rossum guido at python.org
Thu Oct 25 01:43:13 CEST 2012


On Wed, Oct 24, 2012 at 4:26 PM, Yury Selivanov <yselivanov.ml at gmail.com> wrote:
> On 2012-10-24, at 7:12 PM, Guido van Rossum <guido at python.org> wrote:
>> Ok, I can understand. But still, this is a problem with timeouts in
>> general, not just with timeouts in a yield-based environment. How does
>> e.g. Twisted deal with this?

> I don't know; I hope someone with expertise in Twisted can tell us.
>
> But I would imagine that they don't have this particular problem, as it
> should be related only to coroutines and the schedulers that run them.
> I.e. it's a problem when you run some code and may interrupt it.  And
> you can't interrupt plain Python code that uses callbacks, without
> yields or greenlets.

Well, but in the Twisted world, if a cleanup callback requires more
blocking calls, it has to spawn more deferred callbacks. So I think
they *do* have the problem, unless they have no way at all to constrain
the total running time of an action involving cascading callbacks.
Also, they have inlineCallbacks, which does use yield.
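
To make the shape of that concrete, here is a minimal sketch of an
inlineCallbacks-style function whose cleanup itself blocks (the
connection object and its request()/close() methods are made up for
illustration):

    from twisted.internet import defer

    @defer.inlineCallbacks
    def fetch(connect):
        conn = yield connect()           # each yield waits on a Deferred
        try:
            data = yield conn.request()  # hypothetical blocking request
            defer.returnValue(data)
        finally:
            # The cleanup blocks too, so it must yield another Deferred;
            # a timeout firing here would interrupt the cleanup midway.
            yield conn.close()

So any timeout mechanism that interrupts the generator has the same
finally-clause exposure as our yield-based coroutines.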

>> As a work-around, I could imagine some kind of with-statement that
>> tells the scheduler we're already in the finally clause (it could
>> still send you a timeout if your cleanup takes way too long):
>>
>> try:
>>  yield <regular code>
>> finally:
>>  with protect_finally():
>>    yield <cleanup code>
>>
>> Of course this could be abused, but at your own risk -- the scheduler
>> only gives you a fixed amount of extra time and then it quits.
>
> Right, that's the basic approach.  But it also gives you a feeling of
> a "broken" language feature.  I.e. we have coroutines, but we cannot
> implement timeouts on top of them without making 'finally' blocks
> look ugly.  And if we assume that you can run any coroutine with a
> timeout, you'll need to use 'protect_finally' in virtually every
> 'finally' statement.

I think the problem is with timeouts in general, or with doing blocking
I/O in cleanup clauses -- not with yield-based coroutines specifically.
I suspect that any system implementing timeouts has subtle bugs in this
area.
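
To make that concrete: a yield-based scheduler delivers a timeout with
gen.throw(), and nothing stops that from landing inside a finally
clause. A self-contained sketch:

    class Timeout(Exception):
        pass

    def coro():
        try:
            yield 'work'
        finally:
            # The cleanup itself blocks, so it yields; a timeout thrown
            # at this point aborts the cleanup halfway through.
            yield 'cleanup'
            print('cleanup finished')    # never reached

    g = coro()
    g.send(None)  # run up to the first yield
    g.send(None)  # enter the finally clause, suspend at the cleanup yield
    try:
        g.throw(Timeout)  # the "timeout" lands inside the finally clause
    except Timeout:
        print('cleanup was interrupted')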

> I solved the problem by dynamically inlining 'with protect_finally()'
> code in the @coroutine decorator (something that I would never suggest
> putting in the stdlib, btw).  There is also PEP 419, but I don't like
> it either, as it is tied to frames--too low-level--and I'm not sure
> how it will work with future CPython optimizations and PyPy's JIT.
>
> BUT, the concept is nice.  I've implemented a number of protocols with
> yield-coroutines, and managing timeouts with a simple ".with_timeout()"
> call is a very handy and readable feature.  So I hope that we can
> all brainstorm this problem to make coroutines "complete", if we decide
> to start using them widely.

I think the with-clause is the solution.
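
For the record, a minimal sketch of how protect_finally() could be
written -- the scheduler integration here is entirely hypothetical (a
real scheduler would check the counter before throwing a timeout into
the task, and would still enforce a hard deadline):

    from contextlib import contextmanager

    # Hypothetical per-task state; the scheduler would consult it
    # before delivering a timeout.
    class _state:
        protected = 0

    @contextmanager
    def protect_finally():
        _state.protected += 1        # tell the scheduler: hold timeouts
        try:
            yield
        finally:
            _state.protected -= 1    # timeouts may be delivered again

    # Usage, inside a coroutine:
    #     try:
    #         yield regular_code()
    #     finally:
    #         with protect_finally():
    #             yield cleanup_code()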

Note that in a world with only blocking calls this *can* be a problem
(despite your repeated claims that it's not a problem there) -- a
common approach to giving an operation a timeout is sending it SIGTERM
(which you can easily catch with a signal handler in Python) when the
deadline is over, then sending more SIGTERM signals every few seconds
until it dies, and sending SIGKILL (which can't be caught) if it takes
too long to die.
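
In code, that escalation looks something like this (a sketch; it
assumes pid refers to a child process of the caller):

    import os
    import signal
    import time

    def kill_with_grace(pid, interval=2.0, max_terms=5):
        # SIGTERM can be caught, so the child gets a chance to run its
        # own cleanup; escalate to SIGKILL only if it refuses to die.
        for _ in range(max_terms):
            try:
                os.kill(pid, signal.SIGTERM)
            except OSError:
                return                    # process already gone
            time.sleep(interval)
            if os.waitpid(pid, os.WNOHANG)[0] == pid:
                return                    # child exited in time
        try:
            os.kill(pid, signal.SIGKILL)  # can't be caught or ignored
        except OSError:
            pass                          # it died at the last moment

The same shape -- polite interruption first, hard kill after a grace
period -- is what protect_finally() would give a coroutine scheduler.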

-- 
--Guido van Rossum (python.org/~guido)


