[Web-SIG] Server-side async API implementation sketches

Alice Bevan–McGregor alice at gothcandy.com
Sun Jan 9 16:57:01 CET 2011


On 2011-01-09 07:04:49 -0800, exarkun at twistedmatrix.com said:

> Don't say it if it's not true.  Deferreds aren't tied to a reactor, and 
> Marrow doesn't appear to have anything called "deferred".  So this 
> parallel to Twisted's Deferred is misleading and confusing.

It was merely a comparison to the "you schedule something, attach some 
callbacks to it, and when it's finished your callbacks get executed" 
feature.  I did not mention Twisted; also:

:: defer - postpone: hold back to a later time; "let's postpone the exam"

:: deferred - postponed: put off until a later time; "surgery has been 
postponed"

Futures are very similar to deferreds, with the one difference you
mention: future instances are created by the executor/reactor and are
(possibly) its internal representation, rather than the Deferred itself
being the object with which callbacks are registered, as in Twisted.
In most other ways they share the same goals, and even similar methods.
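
For example, with the stdlib reference implementation of PEP 3148, the
executor creates the future at submission time and callbacks are simply
attached to it afterwards, much as callbacks are attached to a
Deferred.  A minimal sketch (stdlib API only; the worker function and
callback names are mine):

    from concurrent.futures import ThreadPoolExecutor

    def work():
        return 21 * 2

    def on_done(future):
        # Invoked by the executor machinery once the future completes.
        print("result:", future.result())

    with ThreadPoolExecutor(max_workers=1) as executor:
        future = executor.submit(work)      # the executor creates the future
        future.add_done_callback(on_done)   # we merely attach callbacks to it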

Marrow's "deferred calls" code is buried in marrow.io, with IOStreams 
accepting callbacks as part of the standard read/write calls and 
registering these internally.  IOStream then performs read/writes 
across the raw sockets utilizing callbacks from the IOLoop reactor.  
When an IOStream meets its criteria (e.g. written all of the requested 
data, read a number of bytes >= the requested count, or read until a 
marker has appeared in the stream, e.g. \r\n) IOLoop then executes the 
callbacks registered with it, passing the data, if any.

I will likely expand this to include additional criteria and callback hooks.

IOStream, in this way, acts more like Twisted Deferreds than Futures.
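
To make the pattern concrete without implying anything about the actual
marrow.io API, here is a self-contained toy: a stream object that
accepts a callback alongside a read request and fires it once its
criterion (a \r\n marker) is met.  All names here are illustrative.

    class ToyStream:
        """Buffers incoming bytes; fires callbacks when their criteria are met."""

        def __init__(self):
            self._buffer = b""
            self._pending = []              # (marker, callback) pairs

        def read_until(self, marker, callback):
            self._pending.append((marker, callback))
            self._check()

        def feed(self, data):
            # A real reactor would call this from socket readiness events.
            self._buffer += data
            self._check()

        def _check(self):
            remaining = []
            for marker, callback in self._pending:
                index = self._buffer.find(marker)
                if index == -1:
                    remaining.append((marker, callback))
                    continue
                end = index + len(marker)
                chunk, self._buffer = self._buffer[:end], self._buffer[end:]
                callback(chunk)
            self._pending = remaining

    stream = ToyStream()
    stream.read_until(b"\r\n", lambda line: print("got:", line))
    stream.feed(b"GET / HTTP/1.1")    # no marker yet, callback not fired
    stream.feed(b"\r\nHost: x\r\n")   # marker arrives, callback fires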

> I think this effort would benefit from more thought on how exactly 
> accessing this external library support will work.  If async wsgi is 
> limited to performing a single read asynchronously, then it hardly 
> seems compelling.

There appears to be a misunderstanding over how futures work.  Please
read PEP 3148 [1] carefully.  While there's not much there, here's the
gist: the executor schedules the callable passed to submit().  If the
"worker pool" is full, the underlying pooling mechanism delays
execution of the callable until a slot is freed.  Pools and slots are
defined, by example only, in terms of thread or process pools, but
implementations are not restricted to those.
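
A quick illustration of that queueing behaviour with the stdlib thread
pool: with a single worker, the second submission simply waits until
the first callable finishes and frees the slot.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def task(name):
        print("running:", name)
        time.sleep(0.5)
        return name

    with ThreadPoolExecutor(max_workers=1) as executor:
        first = executor.submit(task, "first")
        second = executor.submit(task, "second")  # queued until a slot frees
        print("done:", second.result())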

(There are three relevant classes defined by concurrent.futures:
Executor, ProcessPoolExecutor, and ThreadPoolExecutor.  Again, as long
as you implement the duck-typed Executor interface, you're good to go
and compliant with PEP 3148, regardless of the underlying mechanics.)
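
As a sketch of what duck-typed compliance can look like, here is a
trivial executor that satisfies the PEP 3148 interface but runs every
callable inline in the calling thread; it is illustrative only, not
something any of the libraries discussed here ship.

    from concurrent.futures import Executor, Future

    class InlineExecutor(Executor):
        """Runs each submitted callable immediately, in the calling thread."""

        def submit(self, fn, *args, **kwargs):
            future = Future()
            if not future.set_running_or_notify_cancel():
                return future                   # cancelled before it ran
            try:
                future.set_result(fn(*args, **kwargs))
            except BaseException as exc:
                future.set_exception(exc)
            return future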

If a "slot" is available at the moment of submission, the callable has 
a reasonable expectation of being immediately executed.  The 
future.result() method merely blocks awaiting completion of the already 
running, not yet running, or already completed future.  If already 
completed (a la the future sent back up to to the application after 
yielding it) the call to result is non-blocking / immediate.
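
Concretely, with the stdlib thread pool (sleep() standing in for real
work):

    import time
    from concurrent.futures import ThreadPoolExecutor

    with ThreadPoolExecutor(max_workers=1) as executor:
        future = executor.submit(time.sleep, 0.5)
        print(future.done())   # almost certainly False: still running
        future.result()        # blocks until the sleep finishes
        future.result()        # already complete: returns immediately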

Yielding the future is simply a safe way of "blocking" (which would
otherwise usually be done by calling .result() before the future is
complete), not some absolute requirement for the future itself to run.
The future (and thus async socket calls et al.) can, and should, be
scheduled with the underlying async reactor in the call to submit().
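
A toy end-to-end sketch of that flow, with a blocking driver standing
in for a real reactor; the application shape and all names here are
illustrative, not a proposed spec.

    from concurrent.futures import Future, ThreadPoolExecutor

    executor = ThreadPoolExecutor(max_workers=2)

    def slow_read():
        # Stand-in for an async socket read scheduled via submit().
        return b"Hello, async world!"

    def application(environ):
        # Yield the future; the server sends it back once it has completed.
        future = yield executor.submit(slow_read)
        body = future.result()   # already complete here, so non-blocking
        yield b"200 OK", [(b"Content-Length", str(len(body)).encode())], body

    def trivial_server(app, environ):
        # A real server would wait on the future via its reactor instead
        # of blocking on .result() itself.
        gen = app(environ)
        value = next(gen)
        while isinstance(value, Future):
            value.result()              # wait for completion
            value = gen.send(value)     # hand the completed future back
        return value

    print(trivial_server(application, {}))
    executor.shutdown()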

	- Alice.

[1] http://www.python.org/dev/peps/pep-3148/



