Hi Nick,

When you say "high latency" (in __exit__), what does "high" mean? Is that the order of magnitude __exit__ usually takes now, or does it include network I/O?

(Use case: distributed locking with remotely stored locks. Releasing the lock doesn't take long on network timescales, but it can take a long time on CPU timescales.)

On Sun, Jan 6, 2013 at 10:06 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On Sun, Jan 6, 2013 at 5:23 AM, Guido van Rossum <guido@python.org> wrote:
> Possibly (though it will have to be a separate PEP -- PEP 3156 needs
> to be able to run on unchanged Python 3.3). Does anyone on this thread
> have enough understanding of the implementation of context managers
> and generators to be able to figure out how this could be specified
> and implemented (or to explain why it is a bad idea, or impossible)?

There aren't any syntax changes needed to implement asynchronous
locks, since they're unlikely to experience high latency in __exit__.
For that and similar cases, it's enough to use an asynchronous
operation to retrieve the CM in the first place (i.e. acquire in
__iter__ rather than __enter__), or else have __enter__ produce a
Future that acquires the lock in __iter__.
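For the lock case, a minimal sketch of that pattern (the names AsyncLock and task here are hypothetical, and a single bare yield stands in for waiting on an event loop):

```python
class AsyncLock:
    """Acquisition happens in __iter__ (driven by "yield from");
    __enter__/__exit__ stay cheap and synchronous."""

    def __init__(self):
        self._held = False

    def __iter__(self):
        # A real event loop would suspend here until the lock is free;
        # this sketch just yields while contended.
        while self._held:
            yield
        self._held = True
        return self  # "yield from lock" evaluates to the acquired CM

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self._held = False  # releasing is fast, so no yield needed
        return False


def task(lock, log):
    with (yield from lock):  # acquire asynchronously, release synchronously
        log.append("locked")
    log.append("released")


# Driving it by hand (a real event loop would do this):
log = []
gen = task(AsyncLock(), log)
try:
    while True:
        next(gen)
except StopIteration:
    pass
# log is now ["locked", "released"]
```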

The real challenge is in handling something like an asynchronous
database transaction, which will need to yield on __exit__ as it
commits or rolls back the database transaction. At the moment, the
only solutions for that are to switch to a synchronous-to-asynchronous
adapter like gevent or else write out the try/except block and avoid
using the with statement.
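The written-out form might look like this sketch (Transaction and its methods are hypothetical stand-ins, with a bare yield modelling each network round trip):

```python
class Transaction:
    """Hypothetical async transaction: begin/commit/rollback are
    generators so callers can "yield from" them."""

    def __init__(self, db):
        self.db = db

    def begin(self):
        self.db.append("BEGIN")
        yield  # stand-in for a network round trip

    def commit(self):
        self.db.append("COMMIT")
        yield

    def rollback(self):
        self.db.append("ROLLBACK")
        yield


def update(db):
    # What "with txn:" would express if __exit__ could yield:
    txn = Transaction(db)
    yield from txn.begin()
    try:
        db.append("UPDATE")  # the body the with statement would wrap
    except Exception:
        yield from txn.rollback()
        raise
    else:
        yield from txn.commit()


# Driving it by hand:
db = []
gen = update(db)
try:
    while True:
        next(gen)
except StopIteration:
    pass
# db is now ["BEGIN", "UPDATE", "COMMIT"]
```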

It's not an impossible problem, just a tricky one to solve in a
readable fashion. Some possible constraints on the problem space:

- any syntactic solution should work for at least "for" statements and
"with" statements
- also working for comprehensions is highly desirable
- syntactic ambiguity with currently legal constructs should be
avoided. Even if the compiler can figure it out, large behavioural
changes due to a subtle difference in syntax should be avoided because
they're hard for *humans* to read

For example:

    # Synchronous
    for x in y:            # Invokes _iter = iter(y) and _iter.__next__()
    # Asynchronous
    for x in yielding y:   # Invokes _iter = yield from iter(y) and
                           # yield from _iter.__next__()

    # Synchronous
    with x as y:           # Invokes _cm = x, y = _cm.__enter__() and
                           # _cm.__exit__(*args)
    # Asynchronous
    with yielding x as y:  # Invokes _cm = x, y = yield from _cm.__enter__()
                           # and yield from _cm.__exit__(*args)

A new keyword like "yielding" would make it explicit that what is
going on differs from a (yield x) or (yield from x) in the
corresponding expression slot.

Approaches with function-level granularity may also be of interest -
PEP 3152 is largely an exploration of that idea (but would need
adjustments in light of PEP 3156).

Somewhat related, there's also a case to be made that "yield from x"
should fall back to being equivalent to "x()" if x implements __call__
but not __iter__. That way, async ready code can be written using
"yield from", but passing in a pre-canned result via lambda or
functools.partial would no longer require a separate operation that
just adapts the asynchronous call API (i.e. __iter__) to the
synchronous call one (i.e. __call__):

    def async_call(f):
        def _sync(*args, **kwds):
            return f(*args, **kwds)
            yield  # Never reached; forces _sync to be a generator
        return _sync
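A runnable version of that adapter (with the inner function name returned consistently), plus a sketch of how a pre-canned result could then sit in a "yield from" slot:

```python
def async_call(f):
    """Adapt the synchronous call API (__call__) to the asynchronous
    one (__iter__) by wrapping f in a trivial generator."""
    def _sync(*args, **kwds):
        return f(*args, **kwds)
        yield  # Never reached; forces _sync to be a generator
    return _sync


def compute():
    # A pre-canned result (here a lambda) adapted for "yield from":
    result = yield from async_call(lambda: 42)()
    return result


# Driving it by hand: the return value travels via StopIteration.
g = compute()
try:
    next(g)
except StopIteration as exc:
    result = exc.value
# result == 42
```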

The argument against, of course, is the ease with which this can lead
to a "wrong answer" problem, where the exception gets thrown a long way
from the erroneous code that left out the parens on the function call.


Nick Coghlan   |   ncoghlan@gmail.com   |   Brisbane, Australia
Python-ideas mailing list