[Python-ideas] Async API

Yury Selivanov yselivanov.ml at gmail.com
Thu Oct 25 04:25:16 CEST 2012


Greg,

On 2012-10-24, at 9:29 PM, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:

> On 25/10/12 14:07, Yury Selivanov wrote:
>> Right.  And that rolling back - a tiny db query "rollback" - is an
>> async code,
> 
> Only if we implement it as a blocking operation as far as our
> task scheduler is concerned. I wouldn't do it that way -- I'd
> perform it synchronously and assume it'll be fast enough for
> that not to be a problem.

In a non-blocking application there is no way to run blocking code, even if
it's anticipated to block for a mere millisecond.  Because if something gets
out of control and blocks for a longer period of time - everything just
stops, right?

Or did you mean something else with "synchronously" (perhaps Steve Dower's 
approach)?

> BTW, we seem to be using different definitions for the term
> "query". To my way of thinking, a rollback is *not* a query,
> even if it happens to be triggered by sending a "rollback"
> command to the SQL interpreter. At the Python API level,
> it should appear as a distinct operation with its own
> method.

Right.  I meant the "sending a rollback command to the SQL interpreter"
part--this should be done through a non-blocking socket.  To invoke an
operation on a non-blocking socket we have to go through 'yield' or
'yield from', hence giving the scheduler a chance to interrupt the coroutine.
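To make the point concrete, here is a toy sketch (every name in it is
invented for illustration, not an existing API): even a tiny rollback is a
coroutine, because the write to the non-blocking socket goes through
'yield from', and every 'yield' is a point where a scheduler could step in.

```python
def sock_send(outbox, data):
    # Stand-in for a real non-blocking send; the bare 'yield' marks the
    # suspension point where a scheduler could interrupt us.
    outbox.append(data)
    yield

def rollback(outbox):
    # "Sending a rollback command to the SQL interpreter" is itself a
    # coroutine call, not a plain (synchronous) function call.
    yield from sock_send(outbox, b'ROLLBACK')

def run(coro):
    # Minimal trampoline: drive a coroutine to completion.
    for _ in coro:
        pass

outbox = []
run(rollback(outbox))
# outbox now holds the command that was "sent": [b'ROLLBACK']
```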

Even though we know that clean-up code should be simple and fast, in
real-world code it still contains coroutine context switches, be it due to
the need to send some information via a socket, or just a call to some other
coroutine.  If you write a single 'yield' in your finally block, and that
coroutine (or its caller) is called with a timeout, there is a chance that
the execution of its 'finally' block will be aborted by the scheduler.
Writing this yield/non-blocking style of code in finally blocks is,
unfortunately, a necessity.  And even if that cleanup code is incredibly
fast, on a webserver that runs for days/weeks/months, bad things will happen.
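The failure mode above can be reproduced with plain generators (the scheduler
here is simulated by hand with throw()/close(); 'Timeout' and 'task' are
invented names): once a 'finally' block suspends at a 'yield', a second
interruption silently discards the rest of the cleanup.

```python
class Timeout(Exception):
    pass

def task(log):
    try:
        yield                              # suspended doing "real" work
    finally:
        log.append('cleanup started')
        yield                              # cleanup needs a context switch too...
        log.append('cleanup finished')     # ...and this line may never run

log = []
g = task(log)
next(g)            # run the task up to its first suspension point
g.throw(Timeout)   # scheduler signals a timeout; 'finally' starts, then suspends
g.close()          # scheduler gives up on the task: GeneratorExit lands on the
                   # 'yield' inside 'finally', aborting the rest of the cleanup
# log == ['cleanup started'] -- the tail of the finally block was lost
```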

So if we decide to adopt Guido's approach of explicitly marking critical
finally blocks (well, they are all critical) with 'with protected_finally()'
- all right.  If we somehow invent a mechanism that would allow us to hide
all of this from the user and protect finally blocks implicitly in the
scheduler - that's even better.
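For the explicit variant, a 'protected_finally' could be as small as a
context manager that flips a flag the scheduler consults before throwing a
timeout into a task.  This is only a sketch of the idea; the name comes from
the proposal above, and the module-level counter stands in for per-task
state that a real scheduler would keep.

```python
from contextlib import contextmanager

_protected = 0  # stand-in for per-task state a real scheduler would keep

@contextmanager
def protected_finally():
    # While the counter is non-zero, the scheduler would refrain from
    # interrupting this task with a timeout.
    global _protected
    _protected += 1
    try:
        yield
    finally:
        _protected -= 1

def may_interrupt():
    # What a scheduler would check before aborting a coroutine.
    return _protected == 0

with protected_finally():
    inside = may_interrupt()   # False: cleanup is shielded here
outside = may_interrupt()      # True again once the block exits
```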

Or we should design a totally different approach to handling timeouts, and
try not to interrupt coroutines at all.

-
Yury

