[Python-ideas] Async API

Guido van Rossum guido at python.org
Fri Oct 26 19:08:24 CEST 2012


On Fri, Oct 26, 2012 at 9:57 AM, Itamar Turner-Trauring
<itamar at futurefoundries.com> wrote:
>
>
> On Fri, Oct 26, 2012 at 12:36 PM, Guido van Rossum <guido at python.org> wrote:
>>
>> On Fri, Oct 26, 2012 at 8:52 AM, Laurens Van Houtven <_ at lvh.cc> wrote:
>> > err, I suppose the missing bit there is that you'll probably want to:
>> >
>> > reactor.callLater(timeout, d.cancel)
>> >
>> > As opposed to calling d.cancel() directly. (That snippet was in
>> > bpython-urwid with the reactor running in the background, but I doubt
>> > it'd
>> > work well anywhere else outside of manholes :))
>>
>> So I think that Yuri's original problem statement, transformed to
>> Twisted+Deferred, might still apply, depending on how you implement
>> it. Yuri essentially did this:
>>
>> def foobar():  # a task
>>     try:
>>         yield <blocking action>
>>     finally:
>>         # must clean up regardless of whether action succeeded or failed:
>>         yield <blocking cleanup>
>>
>> He then calls this with a timeout, with the semantics that if the
>> generator is blocked in a yield when the timeout arrives, that yield
>> raises a Timeout exception (and at no other time is Timeout raised).
>> The problem with this is that if the action succeeds within the
>> timeout, but barely, there's a chance that the cleanup of a
>> *successful* action receives the Timeout exception. Apparently this
>> bit Yuri. I'm not sure how you'd model that using just Deferreds, but
>> using inlineCallbacks it seems the same thing might happen. Using
>> Deferreds, I assume there's a common pattern to implement this that
>> doesn't have this problem. Of course, using coroutines, there is too
>> -- spawn the cleanup as an independent task.
>
>
> If you call cancel() on a Deferred that already has a result, nothing
> happens. So you don't get a TimeoutError if the operation has succeeded (or
> failed some other way). This would also be true when using inlineCallbacks,
> so there's no issue.
>
> In general I'm not clear why this is a problem: in a single-threaded program
> only one thing happens at a time. Your code for triggering a timeout always
> has the option to check if the operation has succeeded, without worrying
> about race conditions.

But the example is not single-threaded (in the informal sense that you
use it here). Each yield is a suspension point where other things can
happen, and one of those things could be a cancellation of *this* task
(because of a timeout or otherwise).
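This race can be made concrete with a small, self-contained sketch (hypothetical driver code written for this illustration, not Twisted or Yuri's actual framework): a scheduler that throws Timeout into the generator at whatever yield it happens to be suspended on. If the action has already succeeded but the task is suspended on the cleanup yield, the cleanup is what gets interrupted:

```python
# Hypothetical sketch of the race: the "scheduler" (here, plain driver
# code) throws Timeout into the task at its current suspension point.

class Timeout(Exception):
    pass

log = []

def foobar():  # the task from the example, with the placeholders filled in
    try:
        yield "blocking action"       # suspension point 1: the action
        log.append("action done")
    finally:
        # cleanup must run whether the action succeeded or failed:
        yield "blocking cleanup"      # suspension point 2: the cleanup
        log.append("cleanup done")

task = foobar()
next(task)        # run up to the action's yield
task.send(None)   # action succeeds; task is now suspended on the cleanup yield
try:
    # the timeout arrives "late": it hits the *cleanup* yield, not the action
    task.throw(Timeout())
except Timeout:
    log.append("cleanup was interrupted")

print(log)  # ['action done', 'cleanup was interrupted']
```

The action completed successfully, yet "cleanup done" is never reached: the Timeout lands on the yield inside the finally clause, which is exactly the problem described above.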

The example would have to set some flag indicating it has a result
after the first yield (i.e. before entering the finally, or at least
before yielding in the finally clause). And the timeout callback would
have to check this flag. This makes it slightly awkward to design a
general-purpose timeout mechanism for tasks written in this style --
if you expect a timeout or cancellation you must protect your cleanup
code from it by using some API.
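The "protect your cleanup code by using some API" idea later grew a concrete form in asyncio, which postdates this thread: asyncio.shield(). A hedged sketch, not anything proposed in the thread itself, showing cancellation arriving during cleanup while the shielded cleanup still runs to completion:

```python
# Sketch using modern asyncio (not available in 2012): shield() keeps a
# cancellation from aborting the cleanup, while still letting the
# CancelledError propagate to the task's caller.

import asyncio

log = []

async def cleanup():
    log.append("cleanup started")
    await asyncio.sleep(0.05)         # stands in for the blocking cleanup
    log.append("cleanup done")

async def foobar():
    try:
        log.append("action done")     # the action succeeds immediately
    finally:
        # shield() protects the inner task from the outer cancellation
        await asyncio.shield(cleanup())

async def main():
    task = asyncio.ensure_future(foobar())
    await asyncio.sleep(0.01)         # task is now suspended inside cleanup
    task.cancel()                     # the "timeout" fires during cleanup
    try:
        await task
    except asyncio.CancelledError:
        log.append("task cancelled")
    await asyncio.sleep(0.1)          # let the shielded cleanup finish

asyncio.run(main())
print(log)
```

The outer task is still cancelled from its caller's point of view, but the cleanup runs to completion anyway, which is the behavior Guido's flag-checking scheme is trying to approximate by hand.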

Anyway, no need to respond: I think I understand how Twisted deals
with this, and translating that into the world of PEP 380 is not your
job.

-- 
--Guido van Rossum (python.org/~guido)
