[Python-ideas] Python needs a standard asynchronous return object
tristanz at gmail.com
Thu Sep 23 06:41:19 CEST 2010
I'm not an expert on this subject by any stretch, but have been
following the discussion with interest.
One of the more interesting ideas out of Microsoft in the last few
years is their Reactive Framework (Rx), which implements IObserver and
IObservable as the dual to IEnumerator and IEnumerable. This makes
operators on events just as composable as operators on enumerables. It
also comes after several other attempts to formalize a standard async
programming pattern. The ideas seem to translate to a Python
approach as well.
The basic interface is very simple, consisting of a subscribe method
on IObservable and on_next, on_completed, and on_error methods for
IObserver. The power comes from the extension methods, similar to
itertools, defined in the Observable class (http://bit.ly/acBhbP).
These methods provide a huge range of composable functionality.
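In Python terms, the core interfaces might look like the sketch below
(hypothetical code; the names mirror IObserver/IObservable from the
Reactive Framework, not any existing Python library):

```python
# Hypothetical sketch of the Rx core interfaces translated to Python.

class Observer:
    def on_next(self, value):
        """Called once per item in the stream."""

    def on_completed(self):
        """Called exactly once when the stream ends normally."""

    def on_error(self, exc):
        """Called exactly once if the stream ends with an error."""


class Observable:
    def __init__(self, subscribe_fn):
        # subscribe_fn(observer) wires an observer to the source.
        self._subscribe_fn = subscribe_fn

    def subscribe(self, observer):
        self._subscribe_fn(observer)


def from_list(items):
    """A trivial synchronous source that emits each item, then completes."""
    def subscribe_fn(observer):
        for item in items:
            observer.on_next(item)
        observer.on_completed()
    return Observable(subscribe_fn)
```

The whole contract is just those four methods; everything else is
operators layered on top.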
For instance, using a chaining style, consider an async webclient
module that takes a batch of URLs:
responses = webclient.get(['http://www1.cnn.com', 'http://www2.cnn.com'])
responses.filter(lambda x: x.status == 200).first().do(lambda x: print(x.body))
The filter() is nonblocking and returns another observable. The first()
blocks and returns once the first matching document is received. The
do() invokes the given function on that result. Multiple async streams
can be composed together in all sorts
of ways. For instance,
http = webclient.get(['http://www.cnn.com', 'http://www.nyt.com'])
https = webclient.get(['https://www.cnn.com', 'https://www.nyt.com'])
http.zip(https) \
    .filter(lambda x, y: x.status == 200 and y.status == 200) \
    .start(lambda x, y: slow_save(x, y))
This never blocks. It downloads both the http and https versions of
the pages, zips them into a new observable, filters for sites where
both requests succeeded, and then asynchronously saves the remaining
pairs. I
personally find this easy to reason about, and much easier than
manually specifying a callback chain. Errors and completed events
propagate through these chains intuitively. "Marble diagrams" help
with intuition here (http://bit.ly/cl7Oad).
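To make the composition concrete, here is a rough sketch of filter and
zip as operators that take one observable and return another. This is
hypothetical code, synchronous for clarity; a real implementation
would push items asynchronously from callbacks, but the
operator-returns-observable shape is the same:

```python
# Hypothetical sketch: filter and zip as composable operators. Each
# operator returns a new Observable; nothing runs until subscribe().

class Observable:
    def __init__(self, subscribe_fn):
        # subscribe_fn(on_next) pushes items to the subscriber.
        self._subscribe_fn = subscribe_fn

    def subscribe(self, on_next):
        self._subscribe_fn(on_next)

    def filter(self, pred):
        def subscribe_fn(on_next):
            # Forward only the items that satisfy the predicate.
            self.subscribe(lambda x: on_next(x) if pred(x) else None)
        return Observable(subscribe_fn)

    def zip(self, other):
        def subscribe_fn(on_next):
            # Buffer items from each side, pairing them in arrival order.
            left, right = [], []
            def handler(queue, other_queue, flip):
                def on_item(x):
                    if other_queue:
                        y = other_queue.pop(0)
                        on_next((y, x) if flip else (x, y))
                    else:
                        queue.append(x)
                return on_item
            self.subscribe(handler(left, right, False))
            other.subscribe(handler(right, left, True))
        return Observable(subscribe_fn)


def from_list(items):
    """Emit each item of a list, synchronously, on subscribe."""
    def subscribe_fn(on_next):
        for item in items:
            on_next(item)
    return Observable(subscribe_fn)
```

The point is that each operator is a few lines, and any source that
implements subscribe() gets all of them for free.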
All you need to do is implement the observable interface and you get
all the composability for free. Or you can just use any number of
simple methods to convert things to observables
(http://bit.ly/7VMnKv), such as observable.start(lambda: print("hi")).
Or use decorators. If the observable interface became standard, all
future async libraries would be composable, and there would also be a
growing collection of observabletools (by analogy with itertools).
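A start-style converter is only a few lines on top of the threading
module. The names Observable and start below are illustrative, not an
existing library's API:

```python
import threading

# Hypothetical sketch: wrap any callable as a one-item observable by
# running it on a background thread and pushing its result (or its
# exception) to the subscriber.

class Observable:
    def __init__(self, subscribe_fn):
        self._subscribe_fn = subscribe_fn

    def subscribe(self, on_next, on_error=lambda exc: None):
        self._subscribe_fn(on_next, on_error)


def start(func):
    def subscribe_fn(on_next, on_error):
        def run():
            try:
                on_next(func())
            except Exception as exc:
                on_error(exc)
        # Each subscription runs func once on its own thread.
        threading.Thread(target=run).start()
    return Observable(subscribe_fn)
```

Usage would look like start(lambda: fetch(url)).subscribe(print),
with errors flowing down the separate on_error channel.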
As somebody who is new to async programming, I quite quickly grasped
this reactive approach even though I was otherwise completely
unfamiliar with C#. While it may be due to my lack of experience, I
still get confused when thinking about callback chains and error
channels. For instance, I have no idea how to zip an async http call
and a mongodb call into a simple observable that returns a tuple when
both respond and then alerts the user. This would be as simple as
or maybe it's more pythonic to write
although I've never liked this inside-out style.
But perhaps I missed the point of this thread?
On Wed, Sep 22, 2010 at 6:31 PM, Cameron Simpson <cs at zip.com.au> wrote:
> On 20Sep2010 15:41, James Yonan <james at openvpn.net> wrote:
> | * Develop a full-featured standard async result type and reactor
> | model to facilitate interoperability of different async libraries.
> | This would consist of a standard async result type and an abstract
> | base class for a reactor model.
> | * Let PEP 3148 focus on the problem of thread and process pooling
> | and leverage on the above async result type.
> | The semantics that a general async type should support include:
> | 1. Semantics that allow you to define a callback channel for results
> | and optionally a separate channel for exceptions as well.
> | 2. Semantics that offer the flexibility of working with async
> | results at the callback level or at the generator level (having a
> | separate channel for exceptions makes it easy for the generator
> | decorator implementation (that facilitates "yield
> | function_returning_async_object()") to dispatch exceptions into the
> | caller).
> | 3. Semantics that can easily be used to pass results and exceptions
> | back from thread or process pools.
> Just to address this particular aspect (return types and notification),
> I have my own futures-like module, where the equivalent of a Future is
> called a LateFunction.
> There are only 3 basic styles of return in my model. First, there's a
> .report() method in the main (Executor-equivalent) class that yields
> LateFunctions as they complete.
> A LateFunction has two basic get-the-result methods. Having made a
> LF = Later.defer(func)
> You can either go:
> result = LF()
> This waits for func's completion and returns func's return value.
> If func raises an exception, this raises that exception.
> Or you can go:
> result, exc_info = LF.wait()
> which returns:
> result, None
> if func completed without exception and
> None, exc_info
> if an exception was raised, where exc_info is a 3-tuple as from
> sys.exc_info().
> At any rate, when looking for completion you can either get
> LateFunctions as they complete via .report(), or plain function
> results (that may raise exceptions), or (result, exc_info) pairs
> (results xor exceptions).
> This makes implementing the separate streams (results vs exceptions) models
> trivial if it is desired while keeping the LateFunction interface simple
> (few interface methods).
> Yes, I know there's no timeout stuff in there :-(
> Cameron Simpson <cs at zip.com.au> DoD#743
> By God, Mr. Chairman, at this moment I stand astonished at my own moderation!
> - Baron Robert Clive of Plassey
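For what it's worth, the two access styles Cameron describes can be
sketched on plain threads (hypothetical code; his actual module is not
shown here):

```python
import sys
import threading

# Minimal sketch of a LateFunction: LF() waits and re-raises, while
# LF.wait() returns a (result, exc_info) pair. Illustrative only.

class LateFunction:
    def __init__(self, func):
        self._done = threading.Event()
        self._result = None
        self._exc_info = None
        threading.Thread(target=self._run, args=(func,)).start()

    def _run(self, func):
        try:
            self._result = func()
        except Exception:
            self._exc_info = sys.exc_info()
        finally:
            self._done.set()

    def __call__(self):
        # Block until done; return the value or re-raise the exception.
        result, exc_info = self.wait()
        if exc_info is not None:
            raise exc_info[1]
        return result

    def wait(self):
        # Block until done; return (result, None) on success or
        # (None, exc_info) if func raised.
        self._done.wait()
        return self._result, self._exc_info
```

The single wait() channel makes the results-vs-exceptions split
trivial to build on top, as Cameron notes.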