PEP 492: async/await in Python; v3

Hi python-dev,

Another round of updates. The reference implementation has been updated: https://github.com/1st1/cpython/tree/await (includes everything from the summary below, plus tests).

Summary:

1. New "PyTypeObject.tp_await" slot. Replaces "tp_reserved". This is to enable implementation of Futures with the C API. Must return an iterator if implemented.

2. New grammar for "await" expressions; see the 'Syntax of "await" expression' section.

3. inspect.iscoroutine() and inspect.iscoroutinefunction() functions.

4. Full separation of coroutines and generators. This is a big one; let's discuss.

   a) Coroutine objects raise TypeError (is NotImplementedError better?) in their __iter__ and __next__. Therefore it is not possible to pass them to iter(), tuple(), next() and other similar functions that work with iterables.

   b) Because of (a), for..in iteration also no longer works on coroutines.

   c) 'yield from' only accepts coroutine objects from generators decorated with 'types.coroutine'. That means that existing asyncio generator-based coroutines will happily yield from both coroutines and generators. *But* every generator-based coroutine *must* be decorated with `asyncio.coroutine()`. This is potentially a backwards incompatible change.

   d) inspect.isgenerator() and inspect.isgeneratorfunction() return `False` for coroutine objects and coroutine functions.

   e) Should we add a coroutine ABC (for Cython etc.)? I, personally, think this is highly necessary.

First, separation of coroutines from generators is extremely important. One day there won't be generator-based coroutines, and we want to avoid any kind of confusion. Second, we can only do this in 3.5. This kind of semantics change won't ever be possible again.

asyncio recommends using the @coroutine decorator, and most projects that I've seen do use it. Also there is no reason for people to use the iter() and next() functions on coroutines when writing asyncio code.
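The separation described in item 4 is easy to check interactively. A minimal sketch, assuming the reference implementation's semantics (which match what eventually shipped in Python 3.5); the `legacy`/`native` names are illustrative only:

```python
import inspect
import types

@types.coroutine
def legacy():
    # Generator-based coroutine; still awaitable from native coroutines (item c).
    yield

async def native():
    await legacy()

coro = native()

# (a) Coroutine objects are not iterable:
try:
    iter(coro)
except TypeError:
    print('TypeError')

# (d) inspect reports coroutines as coroutines, not generators:
print(inspect.iscoroutine(coro))            # -> True
print(inspect.isgenerator(coro))            # -> False
print(inspect.iscoroutinefunction(native))  # -> True

coro.close()  # avoid a "was never awaited" warning
```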
I doubt that this will cause serious backwards compatibility problems (asyncio also has provisional status).

Thank you,
Yury


PEP: 492
Title: Coroutines with async and await syntax
Version: $Revision$
Last-Modified: $Date$
Author: Yury Selivanov <yselivanov@sprymix.com>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 09-Apr-2015
Python-Version: 3.5
Post-History: 17-Apr-2015, 21-Apr-2015, 27-Apr-2015


Abstract
========

This PEP introduces new syntax for coroutines, asynchronous ``with`` statements and ``for`` loops. The main motivation behind this proposal is to streamline writing and maintaining asynchronous code, as well as to simplify previously hard-to-implement code patterns.


Rationale and Goals
===================

Current Python supports implementing coroutines via generators (PEP 342), further enhanced by the ``yield from`` syntax introduced in PEP 380. This approach has a number of shortcomings:

* it is easy to confuse coroutines with regular generators, since they share the same syntax; async libraries often attempt to alleviate this by using decorators (e.g. ``@asyncio.coroutine`` [1]_);

* it is not possible to natively define a coroutine which has no ``yield`` or ``yield from`` statements, again requiring the use of decorators to fix potential refactoring issues;

* support for asynchronous calls is limited to expressions where ``yield`` is allowed syntactically, limiting the usefulness of syntactic features such as ``with`` and ``for`` statements.

This proposal makes coroutines a native Python language feature, and clearly separates them from generators. This removes generator/coroutine ambiguity, and makes it possible to reliably define coroutines without reliance on a specific library. This also enables linters and IDEs to improve static code analysis and refactoring.

Native coroutines and the associated new syntax features make it possible to define context manager and iteration protocols in asynchronous terms.
As shown later in this proposal, the new ``async with`` statement lets Python programs perform asynchronous calls when entering and exiting a runtime context, and the new ``async for`` statement makes it possible to perform asynchronous calls in iterators.


Specification
=============

This proposal introduces new syntax and semantics to enhance coroutine support in Python; it does not change the internal implementation of coroutines, which are still based on generators.

It is strongly suggested that the reader understands how coroutines are implemented in Python (PEP 342 and PEP 380). It is also recommended to read PEP 3156 (asyncio framework) and PEP 3152 (Cofunctions).

From this point in this document we use the word *coroutine* to refer to functions declared using the new syntax. *Generator-based coroutine* is used where necessary to refer to coroutines that are based on generator syntax.


New Coroutine Declaration Syntax
--------------------------------

The following new syntax is used to declare a coroutine::

    async def read_data(db):
        pass

Key properties of coroutines:

* ``async def`` functions are always coroutines, even if they do not contain ``await`` expressions.

* It is a ``SyntaxError`` to have ``yield`` or ``yield from`` expressions in an ``async`` function.

* Internally, a new code object flag - ``CO_COROUTINE`` - is introduced to enable runtime detection of coroutines (and migrating existing code). All coroutines have both ``CO_COROUTINE`` and ``CO_GENERATOR`` flags set.

* Regular generators, when called, return a *generator object*; similarly, coroutines return a *coroutine object*.

* ``StopIteration`` exceptions are not propagated out of coroutines, and are replaced with a ``RuntimeError``. For regular generators such behavior requires a future import (see PEP 479).


types.coroutine()
-----------------

A new function ``coroutine(gen)`` is added to the ``types`` module.
It applies the ``CO_COROUTINE`` flag to the passed generator function's code object, making it return a *coroutine object* when called. This feature enables an easy upgrade path for existing libraries.


Await Expression
----------------

The following new ``await`` expression is used to obtain the result of a coroutine's execution::

    async def read_data(db):
        data = await db.fetch('SELECT ...')
        ...

``await``, similarly to ``yield from``, suspends execution of the ``read_data`` coroutine until the ``db.fetch`` *awaitable* completes and returns the result data.

It uses the ``yield from`` implementation with an extra step of validating its argument. ``await`` only accepts an *awaitable*, which can be one of:

* A *coroutine object* returned from a *coroutine* or a generator decorated with ``types.coroutine()``.

* An object with an ``__await__`` method returning an iterator.

  Any ``yield from`` chain of calls ends with a ``yield``. This is a fundamental mechanism of how *Futures* are implemented. Since, internally, coroutines are a special kind of generator, every ``await`` is suspended by a ``yield`` somewhere down the chain of ``await`` calls (please refer to PEP 3156 for a detailed explanation.)

  To enable this behavior for coroutines, a new magic method called ``__await__`` is added. In asyncio, for instance, to enable Future objects in ``await`` statements, the only change is to add an ``__await__ = __iter__`` line to the ``asyncio.Future`` class. Objects with an ``__await__`` method are called *Future-like* objects in the rest of this PEP.

  Also, please note that the ``__aiter__`` method (see its definition below) cannot be used for this purpose. It is a different protocol, and would be like using ``__iter__`` instead of ``__call__`` for regular callables.

  It is a ``TypeError`` if ``__await__`` returns anything but an iterator.

* Objects defined with the CPython C API with a ``tp_await`` function, returning an iterator (similar to the ``__await__`` method).
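The Future-like protocol is small enough to demonstrate in a few lines. A minimal sketch (the ``Result`` class and the hand-driving code below are illustrative, not part of the proposal): ``__await__`` is written as a generator method, so calling it returns an iterator; the ``await`` expression suspends at its ``yield`` and receives its final value.

```python
class Result:
    """Minimal Future-like object: __await__ returns an iterator."""
    def __init__(self, value):
        self.value = value

    def __await__(self):
        # A real Future would yield itself until its result is set;
        # here we yield once just to demonstrate the suspension point.
        yield
        return self.value

async def read_data():
    return await Result(42)

# Drive the coroutine by hand, the way an event loop would:
coro = read_data()
coro.send(None)           # runs until Result.__await__ yields
try:
    coro.send(None)       # resumes; the coroutine returns
except StopIteration as exc:
    print(exc.value)      # -> 42
```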
It is a ``SyntaxError`` to use ``await`` outside of a coroutine. It is a ``TypeError`` to pass anything other than an *awaitable* object to an ``await`` expression.


Syntax of "await" expression
''''''''''''''''''''''''''''

The ``await`` keyword is defined differently from ``yield`` and ``yield from``. The main difference is that *await expressions* do not require parentheses around them most of the time. Examples::

    ================================== ===================================
    Expression                         Will be parsed as
    ================================== ===================================
    ``if await fut: pass``             ``if (await fut): pass``
    ``if await fut + 1: pass``         ``if (await fut) + 1: pass``
    ``pair = await fut, 'spam'``       ``pair = (await fut), 'spam'``
    ``with await fut, open(): pass``   ``with (await fut), open(): pass``
    ``await foo()['spam'].baz()()``    ``await ( foo()['spam'].baz()() )``
    ``return await coro()``            ``return ( await coro() )``
    ``res = await coro() ** 2``        ``res = (await coro()) ** 2``
    ``func(a1=await coro(), a2=0)``    ``func(a1=(await coro()), a2=0)``
    ================================== ===================================

See the `Grammar Updates`_ section for details.


Asynchronous Context Managers and "async with"
----------------------------------------------

An *asynchronous context manager* is a context manager that is able to suspend execution in its *enter* and *exit* methods.

To make this possible, a new protocol for asynchronous context managers is proposed. Two new magic methods are added: ``__aenter__`` and ``__aexit__``. Both must return an *awaitable*.
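Because ``__aenter__`` and ``__aexit__`` can themselves be coroutines, the protocol can be exercised even without an event loop. A sketch (``Tracker`` and the ``run`` driver are illustrative names, not part of the proposal; nothing here actually suspends, so the driver completes in one step):

```python
class Tracker:
    """Hypothetical asynchronous context manager recording its calls."""
    def __init__(self):
        self.events = []

    async def __aenter__(self):
        self.events.append('enter')   # real code could await async setup here
        return self

    async def __aexit__(self, exc_type, exc, tb):
        self.events.append('exit')    # real code could await async teardown
        return False                  # do not suppress exceptions

async def use(mgr):
    async with mgr:
        mgr.events.append('body')

def run(coro):
    # Minimal driver standing in for an event loop: step to completion.
    try:
        while True:
            coro.send(None)
    except StopIteration as exc:
        return exc.value

t = Tracker()
run(use(t))
print(t.events)   # -> ['enter', 'body', 'exit']
```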
An example of an asynchronous context manager::

    class AsyncContextManager:
        async def __aenter__(self):
            await log('entering context')

        async def __aexit__(self, exc_type, exc, tb):
            await log('exiting context')


New Syntax
''''''''''

A new statement for asynchronous context managers is proposed::

    async with EXPR as VAR:
        BLOCK

which is semantically equivalent to::

    mgr = (EXPR)
    aexit = type(mgr).__aexit__
    aenter = type(mgr).__aenter__(mgr)
    exc = True

    try:
        try:
            VAR = await aenter
            BLOCK
        except:
            exc = False
            exit_res = await aexit(mgr, *sys.exc_info())
            if not exit_res:
                raise
    finally:
        if exc:
            await aexit(mgr, None, None, None)

As with regular ``with`` statements, it is possible to specify multiple context managers in a single ``async with`` statement.

It is an error to pass a regular context manager without ``__aenter__`` and ``__aexit__`` methods to ``async with``. It is a ``SyntaxError`` to use ``async with`` outside of a coroutine.


Example
'''''''

With asynchronous context managers it is easy to implement proper database transaction managers for coroutines::

    async def commit(session, data):
        ...

        async with session.transaction():
            ...
            await session.update(data)
            ...

Code that needs locking also looks lighter::

    async with lock:
        ...

instead of::

    with (yield from lock):
        ...


Asynchronous Iterators and "async for"
--------------------------------------

An *asynchronous iterable* is able to call asynchronous code in its *iter* implementation, and an *asynchronous iterator* can call asynchronous code in its *next* method. To support asynchronous iteration:

1. An object must implement an ``__aiter__`` method returning an *awaitable* resulting in an *asynchronous iterator object*.

2. An *asynchronous iterator object* must implement an ``__anext__`` method returning an *awaitable*.

3. To stop iteration, ``__anext__`` must raise a ``StopAsyncIteration`` exception.
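The three steps above can be exercised by hand, without the new statement syntax. A sketch (``Counter``, ``collect``, and ``run`` are illustrative names; no real asynchronous I/O is involved, so the driver finishes in a single step):

```python
class Counter:
    """Hypothetical asynchronous iterator producing 0 .. n-1."""
    def __init__(self, n):
        self.i, self.n = 0, n

    async def __aiter__(self):        # step 1: awaitable resulting in the iterator
        return self

    async def __anext__(self):        # step 2: awaitable producing values
        if self.i >= self.n:
            raise StopAsyncIteration  # step 3: signals the end of iteration
        self.i += 1
        return self.i - 1

async def collect(obj):
    # Manual expansion of what 'async for' does under this proposal:
    it = await type(obj).__aiter__(obj)
    items = []
    while True:
        try:
            items.append(await type(it).__anext__(it))
        except StopAsyncIteration:
            break
    return items

def run(coro):
    # Minimal driver standing in for an event loop.
    try:
        coro.send(None)
    except StopIteration as exc:
        return exc.value

print(run(collect(Counter(3))))   # -> [0, 1, 2]
```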
An example of an asynchronous iterable::

    class AsyncIterable:
        async def __aiter__(self):
            return self

        async def __anext__(self):
            data = await self.fetch_data()
            if data:
                return data
            else:
                raise StopAsyncIteration

        async def fetch_data(self):
            ...


New Syntax
''''''''''

A new statement for iterating through asynchronous iterators is proposed::

    async for TARGET in ITER:
        BLOCK
    else:
        BLOCK2

which is semantically equivalent to::

    iter = (ITER)
    iter = await type(iter).__aiter__(iter)
    running = True
    while running:
        try:
            TARGET = await type(iter).__anext__(iter)
        except StopAsyncIteration:
            running = False
        else:
            BLOCK
    else:
        BLOCK2

It is a ``TypeError`` to pass a regular iterable without an ``__aiter__`` method to ``async for``. It is a ``SyntaxError`` to use ``async for`` outside of a coroutine.

As with the regular ``for`` statement, ``async for`` has an optional ``else`` clause.


Example 1
'''''''''

With the asynchronous iteration protocol it is possible to asynchronously buffer data during iteration::

    async for data in cursor:
        ...

Where ``cursor`` is an asynchronous iterator that prefetches ``N`` rows of data from a database after every ``N`` iterations.

The following code illustrates the new asynchronous iteration protocol::

    class Cursor:
        def __init__(self):
            self.buffer = collections.deque()

        def _prefetch(self):
            ...

        async def __aiter__(self):
            return self

        async def __anext__(self):
            if not self.buffer:
                self.buffer = await self._prefetch()
                if not self.buffer:
                    raise StopAsyncIteration
            return self.buffer.popleft()

then the ``Cursor`` class can be used as follows::

    async for row in Cursor():
        print(row)

which would be equivalent to the following code::

    i = await Cursor().__aiter__()
    while True:
        try:
            row = await i.__anext__()
        except StopAsyncIteration:
            break
        else:
            print(row)


Example 2
'''''''''

The following is a utility class that transforms a regular iterable to an asynchronous one.
While this is not a very useful thing to do, the code illustrates the relationship between regular and asynchronous iterators.

::

    class AsyncIteratorWrapper:
        def __init__(self, obj):
            self._it = iter(obj)

        async def __aiter__(self):
            return self

        async def __anext__(self):
            try:
                value = next(self._it)
            except StopIteration:
                raise StopAsyncIteration
            return value

    async for letter in AsyncIteratorWrapper("abc"):
        print(letter)


Why StopAsyncIteration?
'''''''''''''''''''''''

Coroutines are still based on generators internally. So, before PEP 479, there was no fundamental difference between

::

    def g1():
        yield from fut
        return 'spam'

and

::

    def g2():
        yield from fut
        raise StopIteration('spam')

And since PEP 479 is accepted and enabled by default for coroutines, the following example will have its ``StopIteration`` wrapped into a ``RuntimeError``

::

    async def a1():
        await fut
        raise StopIteration('spam')

The only way to tell the outside code that the iteration has ended is to raise something other than ``StopIteration``. Therefore, a new built-in exception class ``StopAsyncIteration`` was added.

Moreover, with semantics from PEP 479, all ``StopIteration`` exceptions raised in coroutines are wrapped in ``RuntimeError``.


Debugging Features
------------------

One of the most frequent mistakes that people make when using generators as coroutines is forgetting to use ``yield from``::

    @asyncio.coroutine
    def useful():
        asyncio.sleep(1) # this will do nothing without 'yield from'

For debugging this kind of mistake there is a special debug mode in asyncio, in which the ``@coroutine`` decorator wraps all functions with a special object that has a destructor logging a warning. Whenever a wrapped generator gets garbage collected, a detailed logging message is generated with information about where exactly the decorator function was defined, a stack trace of where it was collected, etc. The wrapper object also provides a convenient ``__repr__`` function with detailed information about the generator.
The only problem is how to enable these debug capabilities. Since debug facilities should be a no-op in production mode, the ``@coroutine`` decorator makes the decision of whether to wrap or not to wrap based on an OS environment variable ``PYTHONASYNCIODEBUG``. This way it is possible to run asyncio programs with asyncio's own functions instrumented. ``EventLoop.set_debug``, a different debug facility, has no impact on the ``@coroutine`` decorator's behavior.

With this proposal, coroutines are a native concept, distinct from generators. New methods ``set_coroutine_wrapper`` and ``get_coroutine_wrapper`` are added to the ``sys`` module, with which frameworks can provide advanced debugging facilities.

It is also important to make coroutines as fast and efficient as possible; therefore there are no debug features enabled by default.

Example::

    async def debug_me():
        await asyncio.sleep(1)

    def async_debug_wrap(generator):
        return asyncio.CoroWrapper(generator)

    sys.set_coroutine_wrapper(async_debug_wrap)

    debug_me()  # <- this line will likely GC the coroutine object and
                # trigger asyncio.CoroWrapper's code.

    assert isinstance(debug_me(), asyncio.CoroWrapper)

    sys.set_coroutine_wrapper(None)  # <- this unsets any
                                     #    previously set wrapper

    assert not isinstance(debug_me(), asyncio.CoroWrapper)

If ``sys.set_coroutine_wrapper()`` is called twice, the new wrapper replaces the previous one. ``sys.set_coroutine_wrapper(None)`` unsets the wrapper.


inspect.iscoroutine() and inspect.iscoroutinefunction()
-------------------------------------------------------

Two new functions are added to the ``inspect`` module:

* ``inspect.iscoroutine(obj)`` returns ``True`` if ``obj`` is a coroutine object.

* ``inspect.iscoroutinefunction(obj)`` returns ``True`` if ``obj`` is a coroutine function.


Differences between coroutines and generators
---------------------------------------------

A great effort has been made to make sure that coroutines and generators are separate concepts:

1.
   Coroutine objects do not implement ``__iter__`` and ``__next__`` methods. Therefore they cannot be iterated over or passed to ``iter()``, ``list()``, ``tuple()`` and other built-ins. They also cannot be used in a ``for..in`` loop.

2. ``yield from`` does not accept coroutine objects (unless it is used in a generator-based coroutine decorated with ``types.coroutine``.)

3. ``yield from`` does not accept coroutine objects from plain Python generators (*not* generator-based coroutines.)

4. ``inspect.isgenerator()`` and ``inspect.isgeneratorfunction()`` return ``False`` for coroutine objects and coroutine functions.


Coroutine objects
-----------------

Coroutines are based on generators internally, thus they share the implementation. Similarly to generator objects, coroutine objects have ``throw``, ``send`` and ``close`` methods. ``StopIteration`` and ``GeneratorExit`` play the same role for coroutine objects (although PEP 479 is enabled by default for coroutines).


Glossary
========

:Coroutine:
    A coroutine function, or just "coroutine", is declared with ``async def``. It uses ``await`` and ``return value``; see `New Coroutine Declaration Syntax`_ for details.

:Coroutine object:
    Returned from a coroutine function. See `Await Expression`_ for details.

:Future-like object:
    An object with an ``__await__`` method, or a C object with a ``tp_await`` function, returning an iterator. Can be consumed by an ``await`` expression in a coroutine. A coroutine waiting for a Future-like object is suspended until the Future-like object's ``__await__`` completes, and returns the result. See `Await Expression`_ for details.

:Awaitable:
    A *Future-like* object or a *coroutine object*. See `Await Expression`_ for details.

:Generator-based coroutine:
    Coroutines based on generator syntax. The most common example is ``@asyncio.coroutine``.

:Asynchronous context manager:
    An asynchronous context manager has ``__aenter__`` and ``__aexit__`` methods and can be used with ``async with``.
    See `Asynchronous Context Managers and "async with"`_ for details.

:Asynchronous iterable:
    An object with an ``__aiter__`` method, which must return an *asynchronous iterator* object. Can be used with ``async for``. See `Asynchronous Iterators and "async for"`_ for details.

:Asynchronous iterator:
    An asynchronous iterator has an ``__anext__`` method. See `Asynchronous Iterators and "async for"`_ for details.


List of functions and methods
=============================

================= =================================== =================
Method            Can contain                         Can't contain
================= =================================== =================
async def func    await, return value                 yield, yield from
async def __a*__  await, return value                 yield, yield from
def __a*__        return awaitable                    await
def __await__     yield, yield from, return iterable  await
generator         yield, yield from, return value     await
================= =================================== =================

Where:

* "async def func": coroutine;

* "async def __a*__": ``__aiter__``, ``__anext__``, ``__aenter__``, ``__aexit__`` defined with the ``async`` keyword;

* "def __a*__": ``__aiter__``, ``__anext__``, ``__aenter__``, ``__aexit__`` defined without the ``async`` keyword, must return an *awaitable*;

* "def __await__": ``__await__`` method to implement *Future-like* objects;

* generator: a "regular" generator, a function defined with ``def`` that contains at least one ``yield`` or ``yield from`` expression.


Transition Plan
===============

To avoid backwards compatibility issues with the ``async`` and ``await`` keywords, it was decided to modify ``tokenizer.c`` in such a way that it:

* recognizes the ``async def`` combination of name tokens (the start of a coroutine);

* keeps track of regular functions and coroutines;

* replaces the ``'async'`` token with ``ASYNC`` and the ``'await'`` token with ``AWAIT`` when in the process of yielding tokens for coroutines.
This approach allows for seamless combination of new syntax features (all of them available only in ``async`` functions) with any existing code.

An example of having "async def" and an "async" attribute in one piece of code::

    class Spam:
        async = 42

    async def ham():
        print(getattr(Spam, 'async'))

    # The coroutine can be executed and will print '42'


Backwards Compatibility
-----------------------

This proposal preserves 100% backwards compatibility.


Grammar Updates
---------------

Grammar changes are also fairly minimal::

    decorated: decorators (classdef | funcdef | async_funcdef)
    async_funcdef: ASYNC funcdef

    compound_stmt: (if_stmt | while_stmt | for_stmt | try_stmt | with_stmt
                    | funcdef | classdef | decorated | async_stmt)
    async_stmt: ASYNC (funcdef | with_stmt | for_stmt)

    power: atom_expr ['**' factor]
    atom_expr: [AWAIT] atom trailer*


Transition Period Shortcomings
------------------------------

There is just one. Until ``async`` and ``await`` become proper keywords, it is not possible (or at least very hard) to fix ``tokenizer.c`` to recognize them on the **same line** with the ``def`` keyword::

    # async and await will always be parsed as variables

    async def outer():                             # 1
        def nested(a=(await fut)):
            pass

    async def foo(): return (await fut)            # 2

Since ``await`` and ``async`` in such cases are parsed as ``NAME`` tokens, a ``SyntaxError`` will be raised.

To work around these issues, the above examples can be easily rewritten to a more readable form::

    async def outer():                             # 1
        a_default = await fut
        def nested(a=a_default):
            pass

    async def foo():                               # 2
        return (await fut)

This limitation will go away as soon as ``async`` and ``await`` are proper keywords. Or if it's decided to use a future import for this PEP.


Deprecation Plans
-----------------

``async`` and ``await`` names will be softly deprecated in CPython 3.5 and 3.6. In 3.7 we will transform them into proper keywords.
Making ``async`` and ``await`` proper keywords before 3.7 might make it harder for people to port their code to Python 3.


asyncio
-------

The ``asyncio`` module was adapted and tested to work with coroutines and the new statements. Backwards compatibility is 100% preserved.

The required changes are mainly:

1. Modify the ``@asyncio.coroutine`` decorator to use the new ``types.coroutine()`` function.

2. Add an ``__await__ = __iter__`` line to the ``asyncio.Future`` class.

3. Add ``ensure_task()`` as an alias for the ``async()`` function. Deprecate the ``async()`` function.


Design Considerations
=====================

PEP 3152
--------

PEP 3152 by Gregory Ewing proposes a different mechanism for coroutines (called "cofunctions"). Some key points:

1. A new keyword ``codef`` to declare a *cofunction*. A *cofunction* is always a generator, even if there are no ``cocall`` expressions inside it. Maps to ``async def`` in this proposal.

2. A new keyword ``cocall`` to call a *cofunction*. Can only be used inside a *cofunction*. Maps to ``await`` in this proposal (with some differences, see below.)

3. It is not possible to call a *cofunction* without a ``cocall`` keyword.

4. ``cocall`` grammatically requires parentheses after it::

    atom: cocall | <existing alternatives for atom>
    cocall: 'cocall' atom cotrailer* '(' [arglist] ')'
    cotrailer: '[' subscriptlist ']' | '.' NAME

5. ``cocall f(*args, **kwds)`` is semantically equivalent to ``yield from f.__cocall__(*args, **kwds)``.

Differences from this proposal:

1. There is no equivalent of ``__cocall__`` in this PEP. In PEP 3152, ``__cocall__`` is called and its result is passed to ``yield from`` in the ``cocall`` expression. The ``await`` keyword expects an *awaitable* object, validates the type, and executes ``yield from`` on it. Although the ``__await__`` method is similar to ``__cocall__``, it is only used to define *Future-like* objects.

2. ``await`` is defined in almost the same way as ``yield from`` in the grammar (it is later enforced that ``await`` can only be inside ``async def``).
   It is possible to simply write ``await future``, whereas ``cocall`` always requires parentheses.

3. To make asyncio work with PEP 3152 it would be required to modify the ``@asyncio.coroutine`` decorator to wrap all functions in an object with a ``__cocall__`` method, or to implement ``__cocall__`` on generators. To call *cofunctions* from existing generator-based coroutines it would be required to use a ``costart(cofunc, *args, **kwargs)`` built-in.

4. Since it is impossible to call a *cofunction* without a ``cocall`` keyword, it automatically prevents the common mistake of forgetting to use ``yield from`` on generator-based coroutines. This proposal addresses this problem with a different approach, see `Debugging Features`_.

5. A shortcoming of requiring a ``cocall`` keyword to call a coroutine is that if it is decided to implement coroutine-generators -- coroutines with ``yield`` or ``async yield`` expressions -- we wouldn't need a ``cocall`` keyword to call them. So we'll end up having ``__cocall__`` and no ``__call__`` for regular coroutines, and having ``__call__`` and no ``__cocall__`` for coroutine-generators.

6. Requiring parentheses grammatically also introduces a whole lot of new problems. The following code::

    await fut
    await function_returning_future()
    await asyncio.gather(coro1(arg1, arg2), coro2(arg1, arg2))

would look like::

    cocall fut()  # or cocall costart(fut)
    cocall (function_returning_future())()
    cocall asyncio.gather(costart(coro1, arg1, arg2),
                          costart(coro2, arg1, arg2))

7. There are no equivalents of ``async for`` and ``async with`` in PEP 3152.


Coroutine-generators
--------------------

With the ``async for`` keyword it is desirable to have a concept of a *coroutine-generator* -- a coroutine with ``yield`` and ``yield from`` expressions. To avoid any ambiguity with regular generators, we would likely require an ``async`` keyword before ``yield``, and ``async yield from`` would raise a ``StopAsyncIteration`` exception.
While it is possible to implement coroutine-generators, we believe that they are out of scope of this proposal. It is an advanced concept that should be carefully considered and balanced, and requires non-trivial changes in the implementation of current generator objects. This is a matter for a separate PEP.


No implicit wrapping in Futures
-------------------------------

There is a proposal to add a similar mechanism to ECMAScript 7 [2]_. A key difference is that JavaScript "async functions" always return a Promise. While this approach has some advantages, it also implies that a new Promise object is created on each "async function" invocation.

We could implement similar functionality in Python, by wrapping all coroutines in a Future object, but this has the following disadvantages:

1. Performance. A new Future object would be instantiated on each coroutine call. Moreover, this makes the implementation of ``await`` expressions slower (disabling optimizations of ``yield from``).

2. A new built-in ``Future`` object would need to be added.

3. Coming up with a generic ``Future`` interface that is usable for any use case in any framework is a very hard problem to solve.

4. It is not a feature that is used frequently, when most of the code is coroutines.


Why "async" and "await" keywords
--------------------------------

async/await is not a new concept in programming languages:

* C# has had it for a long time [5]_;

* there is a proposal to add async/await to ECMAScript 7 [2]_; see also the Traceur project [9]_;

* Facebook's Hack/HHVM [6]_;

* Google's Dart language [7]_;

* Scala [8]_;

* there is a proposal to add async/await to C++ [10]_;

* and many other less popular languages.

This is a huge benefit, as some users already have experience with async/await, and because it makes working with many languages in one project easier (Python with ECMAScript 7 for instance).


Why "__aiter__" is a coroutine
------------------------------

In principle, ``__aiter__`` could be a regular function.
There are several good reasons to make it a coroutine:

* since most of the ``__anext__``, ``__aenter__``, and ``__aexit__`` methods are coroutines, users would often mistakenly define ``__aiter__`` as ``async`` anyway;

* there might be a need to run some asynchronous operations in ``__aiter__``, for instance to prepare DB queries or to do some file operations.


Importance of "async" keyword
-----------------------------

While it is possible to just implement the ``await`` expression and treat all functions with at least one ``await`` as coroutines, this approach makes API design, code refactoring and long-term support harder. Let's pretend that Python only has the ``await`` keyword::

    def useful():
        ...
        await log(...)
        ...

    def important():
        await useful()

If the ``useful()`` function is refactored and someone removes all ``await`` expressions from it, it would become a regular Python function, and all code that depends on it, including ``important()``, would be broken. To mitigate this issue a decorator similar to ``@asyncio.coroutine`` has to be introduced.


Why "async def"
---------------

For some people the bare ``async name(): pass`` syntax might look more appealing than ``async def name(): pass``. It is certainly easier to type. But on the other hand, it breaks the symmetry between ``async def``, ``async with`` and ``async for``, where ``async`` is a modifier, stating that the statement is asynchronous. It is also more consistent with the existing grammar.


Why "async for/with" instead of "await for/with"
------------------------------------------------

``async`` is an adjective, and hence it is a better choice for a *statement qualifier* keyword. ``await for/with`` would imply that something is awaiting the completion of a ``for`` or ``with`` statement.


Why "async def" and not "def async"
-----------------------------------

The ``async`` keyword is a *statement qualifier*. A good analogy to it are the "static", "public", "unsafe" keywords from other languages.
"async for" is an asynchronous "for" statement, "async with" is an asynchronous "with" statement, "async def" is an asynchronous function.

Having "async" after the main statement keyword might introduce some confusion, e.g. "for async item in iterator" can be read as "for each asynchronous item in iterator".

Having the ``async`` keyword before ``def``, ``with`` and ``for`` also makes the language grammar simpler. And "async def" better separates coroutines from regular functions visually.


Why not a __future__ import
---------------------------

``__future__`` imports are inconvenient and easy to forget to add. Also, they are enabled for the whole source file. Consider that there is a big project with a popular module named "async.py". With future imports it is required to either import it using ``__import__()`` or ``importlib.import_module()`` calls, or to rename the module. The proposed approach makes it possible to continue using old code and modules without a hassle, while coming up with a migration plan for future Python versions.


Why magic methods start with "a"
--------------------------------

New asynchronous magic methods ``__aiter__``, ``__anext__``, ``__aenter__``, and ``__aexit__`` all start with the same prefix "a". An alternative proposal is to use the "async" prefix, so that ``__aiter__`` becomes ``__async_iter__``. However, to align new magic methods with the existing ones, such as ``__radd__`` and ``__iadd__``, it was decided to use a shorter version.


Why not reuse existing magic names
----------------------------------

An alternative idea about new asynchronous iterators and context managers was to reuse existing magic methods, by adding an ``async`` keyword to their declarations::

    class CM:
        async def __enter__(self): # instead of __aenter__
            ...
This approach has the following downsides:

* it would not be possible to create an object that works in both
  ``with`` and ``async with`` statements;

* it would break backwards compatibility, as nothing prohibits
  returning a Future-like object from ``__enter__`` and/or
  ``__exit__`` in Python <= 3.4;

* one of the main points of this proposal is to make coroutines as
  simple and foolproof as possible, hence the clear separation of the
  protocols.


Why not reuse existing "for" and "with" statements
--------------------------------------------------

The vision behind existing generator-based coroutines and this
proposal is to make it easy for users to see where the code might be
suspended.  Making existing "for" and "with" statements recognize
asynchronous iterators and context managers will inevitably create
implicit suspend points, making it harder to reason about the code.


Comprehensions
--------------

For the sake of restricting the broadness of this PEP there is no new
syntax for asynchronous comprehensions.  This should be considered in
a separate PEP, if there is a strong demand for this feature.


Async lambdas
-------------

Lambda coroutines are not part of this proposal.  In this proposal
they would look like ``async lambda(parameters): expression``.  Unless
there is a strong demand to have them as part of this proposal, it is
recommended to consider them later in a separate PEP.


Performance
===========

Overall Impact
--------------

This proposal introduces no observable performance impact.
Here is an output of python's official set of benchmarks [4]_:

::

    python perf.py -r -b default ../cpython/python.exe ../cpython-aw/python.exe

    [skipped]

    Report on Darwin ysmac 14.3.0 Darwin Kernel Version 14.3.0:
    Mon Mar 23 11:59:05 PDT 2015; root:xnu-2782.20.48~5/RELEASE_X86_64
    x86_64 i386

    Total CPU cores: 8

    ### etree_iterparse ###
    Min: 0.365359 -> 0.349168: 1.05x faster
    Avg: 0.396924 -> 0.379735: 1.05x faster
    Significant (t=9.71)
    Stddev: 0.01225 -> 0.01277: 1.0423x larger

    The following not significant results are hidden, use -v to show them:
    django_v2, 2to3, etree_generate, etree_parse, etree_process,
    fastpickle, fastunpickle, json_dump_v2, json_load, nbody, regex_v8,
    tornado_http.


Tokenizer modifications
-----------------------

There is no observable slowdown of parsing python files with the
modified tokenizer: parsing of one 12Mb file
(``Lib/test/test_binop.py`` repeated 1000 times) takes the same amount
of time.


async/await
-----------

The following micro-benchmark was used to determine the performance
difference between "async" functions and generators::

    import sys
    import time

    def binary(n):
        if n <= 0:
            return 1
        l = yield from binary(n - 1)
        r = yield from binary(n - 1)
        return l + 1 + r

    async def abinary(n):
        if n <= 0:
            return 1
        l = await abinary(n - 1)
        r = await abinary(n - 1)
        return l + 1 + r

    def timeit(gen, depth, repeat):
        t0 = time.time()
        for _ in range(repeat):
            list(gen(depth))
        t1 = time.time()
        print('{}({}) * {}: total {:.3f}s'.format(
            gen.__name__, depth, repeat, t1-t0))

The result is that there is no observable performance difference.
Minimum timing of 3 runs:

::

    abinary(19) * 30: total 12.985s
    binary(19) * 30: total 12.953s

Note that depth of 19 means 1,048,575 calls.


Reference Implementation
========================

The reference implementation can be found here: [3]_.

List of high-level changes and new protocols
--------------------------------------------

1. New syntax for defining coroutines: ``async def`` and new ``await``
   keyword.

2. New ``__await__`` method for Future-like objects, and new
   ``tp_await`` slot in ``PyTypeObject``.

3. New syntax for asynchronous context managers: ``async with``, and
   an associated protocol with ``__aenter__`` and ``__aexit__``
   methods.

4. New syntax for asynchronous iteration: ``async for``, and an
   associated protocol with ``__aiter__``, ``__anext__`` and the new
   built-in exception ``StopAsyncIteration``.

5. New AST nodes: ``AsyncFunctionDef``, ``AsyncFor``, ``AsyncWith``,
   ``Await``.

6. New functions: ``sys.set_coroutine_wrapper(callback)``,
   ``sys.get_coroutine_wrapper()``, ``types.coroutine(gen)``,
   ``inspect.iscoroutinefunction()``, and ``inspect.iscoroutine()``.

7. New ``CO_COROUTINE`` bit flag for code objects.

While the list of changes and new things is not short, it is important
to understand that most users will not use these features directly.
They are intended to be used by frameworks and libraries to provide
users with convenient and unambiguous APIs built on ``async def``,
``await``, ``async for`` and ``async with`` syntax.


Working example
---------------

All concepts proposed in this PEP are implemented [3]_ and can be
tested.

::

    import asyncio

    async def echo_server():
        print('Serving on localhost:8000')
        await asyncio.start_server(handle_connection,
                                   'localhost', 8000)

    async def handle_connection(reader, writer):
        print('New connection...')

        while True:
            data = await reader.read(8192)

            if not data:
                break

            print('Sending {:.10}... back'.format(repr(data)))
            writer.write(data)

    loop = asyncio.get_event_loop()
    loop.run_until_complete(echo_server())
    try:
        loop.run_forever()
    finally:
        loop.close()


References
==========

.. [1] https://docs.python.org/3/library/asyncio-task.html#asyncio.coroutine

.. [2] http://wiki.ecmascript.org/doku.php?id=strawman:async_functions

.. [3] https://github.com/1st1/cpython/tree/await

.. [4] https://hg.python.org/benchmarks

.. [5] https://msdn.microsoft.com/en-us/library/hh191443.aspx

.. [6] http://docs.hhvm.com/manual/en/hack.async.php
.. [7] https://www.dartlang.org/articles/await-async/

.. [8] http://docs.scala-lang.org/sips/pending/async.html

.. [9] https://github.com/google/traceur-compiler/wiki/LanguageFeatures#async-funct...

.. [10] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3722.pdf
        (PDF)


Acknowledgments
===============

I thank Guido van Rossum, Victor Stinner, Elvis Pranskevichus, Andrew
Svetlov, and Łukasz Langa for their initial feedback.


Copyright
=========

This document has been placed in the public domain.


..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:

Yury Selivanov schrieb am 28.04.2015 um 05:07:
e) Should we add a coroutine ABC (for cython etc)?
Sounds like the right thing to do, yes. IIUC, a Coroutine would be a new stand-alone ABC with send, throw and close methods. Should a Generator then inherit from both Iterator and Coroutine, or would that counter your intention to separate coroutines from generators as a concept? I mean, they do share the same interface ... It seems you're already aware of https://bugs.python.org/issue24018 Stefan
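For reference, the stand-alone ABC Stefan describes is roughly what later shipped in 3.5 as ``collections.abc.Coroutine`` (with issue 24018 covering the companion ``Generator`` ABC). A quick sketch of how the two ABCs keep the concepts apart while sharing the send/throw/close interface:

```python
from collections.abc import Coroutine, Generator

async def coro_fn():
    return 1

def gen_fn():
    yield 1

c = coro_fn()
g = gen_fn()

# A native coroutine object is a Coroutine but not a Generator...
print(isinstance(c, Coroutine))  # True
print(isinstance(c, Generator))  # False

# ...while a plain generator is a Generator but not a Coroutine,
# even though both have send(), throw() and close().
print(isinstance(g, Coroutine))  # False
print(isinstance(g, Generator))  # True

c.close()  # avoid a "coroutine was never awaited" warning
g.close()
```

So in the final design a Generator does not inherit from Coroutine; the two ABCs stay separate even though their method sets overlap.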

On 28 Apr 2015, at 5:07, Yury Selivanov wrote:
Does this mean it's not possible to implement an async version of os.walk() if we had an async version of os.listdir()? I.e. for async code we're back to implementing iterators "by hand" instead of using generators for it.
[...]
Servus, Walter
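Walter's concern is concrete: without asynchronous generators (which only arrived later, in PEP 525), an async iterator has to be written "by hand" via the ``__aiter__``/``__anext__`` protocol. A minimal sketch of that pattern (the class and its data are illustrative; note that in the protocol as finally shipped ``__aiter__`` is a plain method, while the draft under discussion allowed it to be a coroutine):

```python
import asyncio

class AsyncListIterator:
    """Hand-written async iterator -- what the PEP requires today."""

    def __init__(self, items):
        self._it = iter(items)

    def __aiter__(self):
        return self

    async def __anext__(self):
        await asyncio.sleep(0)  # pretend each step does async I/O
        try:
            return next(self._it)
        except StopIteration:
            # End of async iteration is signalled by the new exception.
            raise StopAsyncIteration

async def collect():
    result = []
    async for item in AsyncListIterator(['a', 'b', 'c']):
        result.append(item)
    return result

print(asyncio.run(collect()))  # ['a', 'b', 'c']
```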

Inline comments below... On Mon, Apr 27, 2015 at 8:07 PM, Yury Selivanov <yselivanov.ml@gmail.com> wrote:
check for a generalized awaitable rather than specifically a coroutine.
these methods at all but given the implementation tactic for coroutines that may not be possible, so the nearest approximation is TypeError. (Also, NotImplementedError is typically to indicate that a subclass should implement it.)
Sounds like Stefan agrees. Are you aware of http://bugs.python.org/issue24018 (Generator ABC)?
I also hope that if someone has their own (renamed) copy of asyncio that works with 3.4, it will all still work with 3.5. Even if asyncio itself is provisional, none of the primitives (e.g. yield from) that it is built upon are provisional, so there should be no reason for it to break in 3.5.
I wonder if you could add some adaptation of the explanation I have posted (a few times now, I feel) for the reason why I prefer to suspend only at syntactically recognizable points (yield [from] in the past, await and async for/with in this PEP). Unless you already have it in the rationale (though it seems Mark didn't think it was enough :-).
Implementation changes don't need to go through the PEP process, unless they're really also interface changes.
Despite reading this I still get confused when reading the PEP (probably because asyncio uses "coroutine" in the latter sense). Maybe it would make sense to write "native coroutine" for the new concept, to distinguish the two concepts more clearly? (You could even change "awaitable" to "coroutine". Though I like "awaitable" too.)
``yield`` is disallowed syntactically outside functions (as are the syntactic constraints on ``await`` and ``async for|def``). Why isn't it placed in an 'else' clause on the inner try? (There may well be a reason but I can't figure out what it is, and PEP 343 doesn't seem to explain it.) Also, it's a shame we're perpetuating the sys.exc_info() triple in the API here, but I agree making __exit__ and __aexit__ different also isn't a great idea. :-( PS. With the new tighter syntax for ``await`` you don't need the ``exit_res`` variable any more.
(Also, the implementation of the latter is problematic -- check asyncio/locks.py and notice that __enter__ is empty...)
Have you considered making __aiter__ not an awaitable? It's not strictly necessary I think, one could do all the awaiting in __anext__. Though perhaps there are use cases that are more naturally expressed by awaiting in __aiter__? (Your examples all use ``async def __aiter__(self): return self`` suggesting this would be no great loss.)
it's about stopping an async iteration, but in my head I keep referring to it as AsyncStopIteration, probably because in other places we use async (or 'a') as a prefix.
How does ``yield from`` know that it is occurring in a generator-based coroutine?
3. ``yield from`` does not accept coroutine objects from plain Python generators (*not* generator-based coroutines.)
I am worried about this. PEP 380 gives clear semantics to "yield from <generator>" and I don't think you can revert that here. Or maybe I am misunderstanding what you meant here? (What exactly are "coroutine objects from plain Python generators"?)
Does send() make sense for a native coroutine? Check PEP 380. I think the only way to access the send() argument is by using ``yield`` but that's disallowed. Or is this about send() being passed to the ``yield`` that ultimately suspends the chain of coroutines? (You may just have to rewrite the section about that -- it seems a bit hidden now.)
(I'm not sure of the utility of this section.)
(This line seems redundant.)
    def __a*__      return awaitable                      await
    def __await__   yield, yield from, return iterable    await
Is this still true with the proposed restrictions on what ``yield from`` accepts? (Hopefully I'm the one who is confused. :-)
-- --Guido van Rossum (python.org/~guido)

Hi Guido, Thank you for a very detailed review. Comments below: On 2015-04-28 5:49 PM, Guido van Rossum wrote:
My main question here: is it OK to reuse 'tp_reserved' (the former tp_compare)? I had to remove this check: https://github.com/1st1/cpython/commit/4be6d0a77688b63b917ad88f09d446ac3b7e2... On the other hand, I think that it's a slightly better solution than adding a new slot.
Great! The current grammar requires parentheses for consecutive await expressions::

    await (await coro())

I can change this (in theory), but I kind of like the parens in this case -- better readability. And it'll be a very rare case.
It's important to at least have 'iscoroutinefunction' -- to check that an object is a coroutine function. A typical use-case would be a web framework that lets you bind coroutines to specific HTTP methods/paths::

    @http.get('/spam')
    async def handle_spam(request):
        ...

The 'http.get' decorator will need a way to raise an error if it's applied to a regular function (while the code is being imported, not at runtime). The idea here is to cover all kinds of python objects in the inspect module; it's Python's reflection API. The other thing is that it's easy to implement this function for CPython: just check for the CO_COROUTINE flag. For other Python implementations it might be a different story. (More arguments for isawaitable() below)
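The import-time check such a framework decorator would perform can be sketched as follows. The ``get`` decorator, its ``_route`` attribute, and the framework itself are hypothetical; ``inspect.iscoroutinefunction`` is the real introspection call being argued for:

```python
import inspect

def get(path):
    """Hypothetical framework decorator binding a coroutine to a route."""
    def decorator(func):
        if not inspect.iscoroutinefunction(func):
            # Fail while the module is being imported,
            # not when the first request arrives.
            raise TypeError(
                '{!r} must be an async function to handle {}'.format(
                    func, path))
        func._route = path  # illustrative registration
        return func
    return decorator

@get('/spam')
async def handle_spam(request):
    ...

print(handle_spam._route)  # /spam

try:
    @get('/eggs')
    def handle_eggs(request):  # not async: rejected at import time
        ...
except TypeError as exc:
    print('rejected:', type(exc).__name__)
```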
Agree.
I'll experiment with replacing (c) with a warning. We can disable __iter__ and __next__ for coroutines, but allow 'yield from' on them. Would that be a better approach?
(d) can also break something (hypothetically). I'm not sure why someone would use isgenerator() and isgeneratorfunction() on generator-based coroutines in asyncio-based code, but there is a chance that someone did (it should be trivial to fix the code). Same for iter() and next(). The chance is slim, but we may break some obscure code. Are you OK with this?
Yes, I saw the issue. I'll review it in more detail before thinking about Coroutine ABC for the next PEP update.
I agree. I'll try warnings for yield-fromming coroutines from regular generators (so that we can disable it in 3.6 or 3.7). *If that doesn't work*, I think we need a compromise (not ideal, but breaking things is worse):

- 'yield from' would always accept coroutine objects;
- iter(), next(), tuple(), etc. won't work on coroutine objects;
- for..in won't work on coroutine objects.
I'll see what I can do.
"awaitable" is a more generic term... It can be a future, or it can be a coroutine. Mixing them in one may create more confusion. Also, "awaitable" is more of an interface, or a trait, which means that the object won't be rejected by the 'await' expression. I like your 'native coroutine' suggestion. I'll update the PEP.
OK
Good catch.
Yes, this can be simplified. It was indeed copied from PEP 343.
There is a section in Design Considerations about this. I should add a reference to it.
I'd be totally OK with that. Should I rename it?
I think it's a mistake that a lot of beginners may make at some point (and in this sense it's frequent). I really doubt that once you've been hit by it more than a couple of times you would make it again. This is a small wart, but we have to have a solution for it.
Will add a subsection specifically for them.
I think that isawaitable() would be really useful, especially to check whether an object implemented with the C API has a tp_await function. isawaitablefunction() looks a bit confusing to me::

    def foo():
        return fut

is awaitable, but there is no way to detect that.

::

    def foo(arg):
        if arg == 'spam':
            return fut

is awaitable sometimes.
I check that in 'ceval.c' in the implementation of YIELD_FROM opcode. If the current code object doesn't have a CO_COROUTINE flag and the opcode arg is a generator-object with CO_COROUTINE -- we raise an error.
::

    # *Not* decorated with @coroutine
    def some_algorithm_impl():
        yield 1
        yield from native_coroutine()  # <- this is a bug

"some_algorithm_impl" is a regular generator. By mistake someone could try to use "yield from" on a native coroutine (which is 99.9% a bug). So we can rephrase it to:

    ``yield from`` does not accept *native coroutine objects* from regular Python generators

I also agree that raising an exception in this case in 3.5 might break too much existing code. I'll try warnings, and if it doesn't work we might want to just let this restriction slip.
Yes, 'send()' is needed to push values to the 'yield' statement somewhere (future) down the chain of coroutines (suspension point). This has to be articulated in a clear way, I'll think how to rewrite this section without replicating PEP 380 and python documentation on generators.
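The way a send()-ed value travels down the chain to the innermost suspension point can be demonstrated with ``types.coroutine``, driving the coroutine by hand the way an event loop would (a minimal sketch; the names are illustrative):

```python
import types

@types.coroutine
def suspend():
    # The innermost ``yield`` -- the actual suspension point.
    value = yield 'suspended'
    return value

async def outer():
    got = await suspend()   # ``await`` delegates like ``yield from``
    return got * 2

coro = outer()
print(coro.send(None))       # 'suspended' -- ran until the yield
try:
    coro.send(21)            # 21 is delivered to the paused yield
except StopIteration as exc:
    print(exc.value)         # 42
```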
It's a little bit hard to understand that "awaitable" is a general term that includes native coroutine objects, so it's OK to write both::

    def __aenter__(self):
        return fut

    async def __aenter__(self):
        ...

We (Victor and I) decided that it might be useful to have an additional section that explains this.
True for the code that uses @coroutine decorators properly. I'll see what I can do with warnings, but I'll update the section anyways.
Are you OK with this thing?

On Tue, Apr 28, 2015 at 4:55 PM, Ethan Furman <ethan@stoneleaf.us> wrote:
You could at least provide an explanation about how the current proposal falls short. What code will break? There's a cost to __future__ imports too. The current proposal is a pretty clever hack -- and we've done similar hacks in the past (last I remember when "import ... as ..." was introduced but we didn't want to make 'as' a keyword right away). -- --Guido van Rossum (python.org/~guido)

Guido van Rossum wrote:
There's a benefit to having a __future__ import beyond avoiding hackery: by turning on the __future__ you can find out what will break when they become real keywords. But I suppose that could be achieved by having both the hack *and* the __future__ import available. -- Greg

Guido, I found a solution for disabling 'yield from', iter()/tuple() and 'for..in' on native coroutines with 100% backwards compatibility. The idea is to add one more code object flag, CO_NATIVE_COROUTINE, which will be applied, along with CO_COROUTINE, to all 'async def' functions. This way:

1. Old generator-based coroutines from asyncio are awaitable, because of the CO_COROUTINE flag (which the asyncio.coroutine decorator will set with 'types.coroutine').

2. New 'async def' functions are awaitable because of the CO_COROUTINE flag.

3. The generator object's __iter__ and __next__ raise an error *only* if it has the CO_NATIVE_COROUTINE flag. So iter(), next() and for..in aren't supported only for 'async def' functions (but will work fine on asyncio generator-based coroutines).

4. 'yield from' *only* raises an error if it yields a *coroutine with the CO_NATIVE_COROUTINE flag* from a regular generator.

Thanks, Yury
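For what it's worth, the flag scheme that ultimately shipped in 3.5 uses slightly different names: CO_COROUTINE marks 'async def' code objects and CO_ITERABLE_COROUTINE marks types.coroutine-wrapped generators. Both constants are exposed by the inspect module, so the two-flag distinction can be observed directly:

```python
import inspect
import types

async def native():
    pass

@types.coroutine
def gen_based():
    yield

# 'async def' code objects carry CO_COROUTINE...
print(bool(native.__code__.co_flags & inspect.CO_COROUTINE))              # True

# ...while types.coroutine stamps CO_ITERABLE_COROUTINE on generators.
print(bool(gen_based.__code__.co_flags & inspect.CO_ITERABLE_COROUTINE))  # True

# A plain generator function carries neither flag.
def plain():
    yield

print(bool(plain.__code__.co_flags &
           (inspect.CO_COROUTINE | inspect.CO_ITERABLE_COROUTINE)))       # False
```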

Yury Selivanov wrote:
What about new 'async def' code called by existing code that expects to be able to use iter() or next() on the future objects it receives?
Won't that prevent some existing generator-based coroutines (ones not decorated with @coroutine) from calling ones implemented with 'async def'? -- Greg

On 2015-04-29 5:13 AM, Greg Ewing wrote:
It would. But that's not a backwards compatibility issue. Everything will work in 3.5 without a single line change. If you want to use new coroutines - use them, everything will work too. If, however, during the refactoring you've missed several generator-based coroutines *and* they are not decorated with @coroutine - then yes, you will get a runtime error. I see absolutely no problem with that. It's a small price to pay for a better design. Yury

Yury Selivanov wrote:
It seems to go against Guido's desire for the new way to be a 100% drop-in replacement for the old way. There are various ways that old code can end up calling new code -- subclassing, callbacks, etc. It also means that if person A writes a library in the new style, then person B can't make use of it without upgrading all of their code to the new style as well. The new style will thus be "infectious" in a sense. I suppose it's up to Guido to decide whether it's a good or bad infection. But the same kind of reasoning seemed to be at least partly behind the rejection of PEP 3152. -- Greg

Greg, On 2015-04-29 6:46 PM, Greg Ewing wrote:
It's a drop-in replacement ;) If you run your existing code, it will 100% work just fine. There is a probability that *when* you start applying the new syntax something could go wrong -- you're right there. I'm updating the PEP to explain this clearly, and let's see what Guido thinks about it. My opinion is that this is a solvable problem, with clear guidelines on how to transition existing code to the new style. Thanks, Yury

Yury Selivanov wrote:
But isn't that too restrictive? Any function that returns an awaitable object would work in the above case.
What about when you change an existing non-suspendable function to make it suspendable, and have to deal with the ripple-on effects of that? Seems to me that affects everyone, not just beginners.
So what you really mean is "yield-from, when used inside a function that doesn't have @coroutine applied to it, will not accept a coroutine object", is that right? If so, I think this part needs re-wording, because it sounded like you meant something quite different. I'm not sure I like this -- it seems weird that applying a decorator to a function should affect the semantics of something *inside* the function -- especially a piece of built-in syntax such as 'yield from'. It's similar to the idea of replacing 'async def' with a decorator, which you say you're against. BTW, by "coroutine object", do you mean only objects returned by an async def function, or any object having an __await__ method? I think a lot of things would be clearer if we could replace the term "coroutine object" with "awaitable object" everywhere.
``yield from`` does not accept *native coroutine objects* from regular Python generators
It's the "from" there that's confusing -- it sounds like you're talking about where the argument to yield-from comes from, rather than where the yield-from expression resides. In other words, we thought you were proposing to disallow *this*::

    # *Not* decorated with @coroutine
    def some_algorithm_impl():
        yield 1
        yield from iterator_implemented_by_generator()

I hope you agree that this is a perfectly legitimate thing to do, and should remain so? -- Greg

Greg, On 2015-04-29 5:12 AM, Greg Ewing wrote:
It's just an example. All in all, I think that we should have full coverage of python objects in the inspect module. There are many possible use cases besides the one that I used -- runtime introspection, reflection, debugging etc, where you might need them.
I've been using coroutines on a daily basis for 6 or 7 years now, long before asyncio we had a coroutine-based framework at my firm (yield + trampoline). Neither I nor my colleagues had any problems with refactoring the code. I really try to speak from my experience when I say that it's not that big of a problem. Anyways, the PEP provides set_coroutine_wrapper which should solve the problem.
This is for the transition period. We don't want to break existing asyncio code. But we do want coroutines to be a separate concept from generators. It doesn't make any sense to iterate through coroutines or to yield-from them. We can deprecate @coroutine decorator in 3.6 or 3.7 and at some time remove it.
The PEP clearly separates awaitable from coroutine objects:

- a coroutine object is returned from a coroutine call;
- an awaitable is either a coroutine object or an object with __await__.

list(), tuple(), iter(), next(), for..in etc. won't work on objects with __await__ (unless they implement __iter__). The problem I was discussing is specifically about 'yield from' and coroutine objects.
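The distinction can be made concrete with a small hand-driven example: an object with __await__ is awaitable, yet the coroutine object that awaits it still rejects iter(). A sketch (FutureLike is illustrative; the send(None) calls mimic what an event loop does):

```python
class FutureLike:
    """Awaitable via __await__, but not itself an iterable or coroutine."""
    def __await__(self):
        yield           # pretend to suspend once, like a real Future
        return 'done'

async def use():
    return await FutureLike()

coro = use()

try:
    iter(coro)          # coroutine objects reject iter() under the PEP
except TypeError:
    print('not iterable')

# Drive it the way an event loop would:
coro.send(None)         # runs up to the yield inside __await__
try:
    coro.send(None)     # resumes; the coroutine returns
except StopIteration as exc:
    print(exc.value)    # 'done'
```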
Sure it's perfectly normal ;) I apologize for the poor wording. Yury

On 29/04/2015 9:49 a.m., Guido van Rossum wrote:
That seems unavoidable if the goal is for 'await' to only work on generators that are intended to implement coroutines, and not on generators that are intended to implement iterators. Because there's no way to tell them apart without marking them in some way. -- Greg

On 2015-04-28 11:59 PM, Greg wrote:
Not sure what you mean by "unavoidable". Before the last revision of the PEP it was perfectly fine to use generators in 'yield from' in generator-based coroutines::

    @asyncio.coroutine
    def foo():
        yield from gen()

and yet you couldn't do the same with 'await' (as it has a special opcode instead of GET_ITER that can validate what you're awaiting). With the new version of the PEP, 'yield from' in foo() would raise a TypeError. If we change it to a RuntimeWarning then we're safe in terms of backwards compatibility. I just want to see how exactly warnings will work (i.e. will they occur multiple times at the same 'yield from' expression, etc.) Yury

Yury Selivanov wrote:
Guido is worried about existing asyncio-based code that doesn't always decorate its generators with @coroutine. If I understand correctly, if you have::

    @coroutine
    def coro1():
        yield from coro2()

    def coro2():
        yield from ...

then coro1() would no longer work. In other words, some currently legitimate asyncio-based code will break under PEP 492 even if it doesn't use any PEP 492 features. What you seem to be trying to do here is catch the mistake of using a non-coroutine iterator as if it were a coroutine. By "unavoidable" I mean I can't see a way to achieve that in all possible permutations without giving up some backward compatibility. -- Greg

Guido van Rossum wrote:
+1, that seems more consistent to me too.
I think that's a red herring in relation to the reason for StopAsyncIteration/AsyncStopIteration being needed. The real reason is that StopIteration is already being used to signal returning a value from an async function, so it can't also be used to signal the end of an async iteration.
I think we need some actual evidence before we can claim that one of these mistakes is more easily made than the other. A priori, I would tend to assume that failing to use 'await' when it's needed would be the more insidious one. If you mistakenly treat the return value of a function as a future when it isn't one, you will probably find out about it pretty quickly even under the old regime, since most functions don't return iterators. On the other hand, consider refactoring a function that was previously not a coroutine so that it now is. All existing calls to that function now need to be located and have either 'yield from' or 'await' put in front of them. There are three possibilities:

1. The return value is not used. The destruction-before-iterated-over heuristic will catch this (although since it happens in a destructor, you won't get an exception that propagates in the usual way).

2. Some operation is immediately performed on the return value. Most likely this will fail, so you will find out about the problem promptly and get a stack trace, although the error message will be somewhat tangentially related to the cause.

3. The return value is stored away for later use. Some time later, an operation on it will fail, but it will no longer be obvious where the mistake was made.

So it's all a bit of a mess, IMO. But maybe it's good enough. We need data. How often have people been bitten by this kind of problem, and how much trouble did it cause them?
That's made me think of something else. Suppose you want to suspend execution in an 'async def' function -- how do you do that if 'yield' is not allowed? You may need something like the suspend() primitive that I was thinking of adding to PEP 3152.
I don't see how this is different from an 'async def' function always returning an awaitable object, or a new awaitable object being created on each 'async def' function invocation. Sounds pretty much isomorphic to me. -- Greg

Greg, On 2015-04-29 5:12 AM, Greg Ewing wrote:
When we start thinking about generator-coroutines (the ones that combine 'await' and 'async yield'-something), we'll have to somehow multiplex them to the existing generator object (at least that's one way to do it). StopIteration is already extremely loaded with different special meanings. [..]
We do this in asyncio with Futures. We never combine 'yield' and 'yield from' in a @coroutine. We don't need 'suspend()'. If you need suspend()-like thing in your own framework, implement an object with an __await__ method and await on it.
Agree. I'll try to reword that section. Thanks, Yury

On Tue Apr 28 23:49:56 CEST 2015, Guido van Rossum quoted PEP 492:
So? PEP 492 never says what coroutines *are* in a way that explains why it matters that they are different from generators. Do you really mean "coroutines that can be suspended while they wait for something slow"? As best I can guess, the difference seems to be that a "normal" generator is using yield primarily to say: "I'm not done; I have more values when you want them", but an asynchronous (PEP492) coroutine is primarily saying: "This might take a while, go ahead and do something else meanwhile."
Does it really permit *making* them, or does it just signal that you will be waiting for them to finish processing anyhow, and it doesn't need to be a busy-wait? As nearly as I can tell, "async with" doesn't start processing the managed block until the "asynchronous" call finishes its work -- the only point of the async is to signal a scheduler that the task is blocked. Similarly, "async for" is still linearized, with each step waiting until the previous "asynchronous" step was not merely launched, but fully processed. If anything, it *prevents* within-task parallelism.
What justifies this limitation? Is there anything wrong awaiting something that eventually uses "return" instead of "yield", if the "this might take a while" signal is still true? Is the problem just that the current implementation might not take proper advantage of task-switching?
What would be wrong if a class just did __await__ = __anext__ ? If the problem is that the result of __await__ should be iterable, then why isn't __await__ = __aiter__ OK?
Does that mean "The ``await`` keyword has slightly higher precedence than ``yield``, so that fewer expressions require parentheses"?
Other than the arbitrary "keyword must be there" limitations imposed by this PEP, how is that different from::

    class AsyncContextManager:
        async def __aenter__(self):
            log('entering context')

or even::

    class AsyncContextManager:
        def __aenter__(self):
            log('entering context')

Will anything different happen when calling __aenter__ or log? Is it that log itself now has more freedom to let other tasks run in the middle?
Why? Does that just mean they won't take advantage of the freedom you offered them? Or are you concerned that they are more likely to cooperate badly with the scheduler in practice?
The same questions about why -- what is the harm?
Again, I don't see what this buys you except that a scheduler has been signaled that it is OK to pre-empt between rows. That is worth signaling, but I don't see why a regular iterator should be forbidden.
So the decision is made at compile-time, and can't be turned on later? Then what is wrong with just offering an alternative @coroutine that can be used to override the builtin? Or why not just rely on set_coroutine_wrapper entirely, and simply set it to None (so no wasted wrappings) by default? -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ

Hi Jim, On 2015-04-29 1:43 PM, Jim J. Jewett wrote:
Correct.
It does.
Right.
It enables cooperative parallelism.
We want to avoid people passing regular generators and random objects to 'await', because it is a bug.
If it's an 'async def' then sure, you can use it in await.
For coroutines in PEP 492: __await__ = __anext__ is the same as __call__ = __next__ __await__ = __aiter__ is the same as __call__ = __iter__
This is OK. The point is that you can use 'await log' in __aenter__. If you don't need awaits in __aenter__ you can use them in __aexit__. If you don't need them there too, then just define a regular context manager.
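Yury's point about awaits inside __aenter__/__aexit__ can be shown in runnable form. A sketch, with ``asyncio.sleep(0)`` standing in for the hypothetical ``await log(...)`` call:

```python
import asyncio

class AsyncContextManager:
    async def __aenter__(self):
        await asyncio.sleep(0)   # stands in for ``await log('entering')``
        return self

    async def __aexit__(self, exc_type, exc, tb):
        await asyncio.sleep(0)   # stands in for ``await log('exiting')``
        return False             # don't swallow exceptions

async def main():
    async with AsyncContextManager() as cm:
        return type(cm).__name__

print(asyncio.run(main()))  # AsyncContextManager
```

If neither method needs to await anything, a regular context manager with __enter__/__exit__ is all that's required.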
__aenter__ must return an awaitable.
Not sure I understand the question. It doesn't make any sense to use 'async with' outside of a coroutine. The interpreter won't know what to do with it: you need an event loop for that.
It's not about signaling. It's about allowing cooperative scheduling of long-running processes.
It is set to None by default. Will clarify that in the PEP. Thanks, Yury

On Wed Apr 29 20:06:23 CEST 2015, Yury Selivanov replied:
As best I can guess, the difference seems to be that a "normal" generator is using yield primarily to say:
"I'm not done; I have more values when you want them",
but an asynchronous (PEP492) coroutine is primarily saying:
"This might take a while, go ahead and do something else meanwhile."
Correct.
Then I strongly request a more specific name than coroutine. I would prefer something that refers to cooperative pre-emption, but I haven't thought of anything that is short without leading to other types of confusion. My least bad idea at the moment would be "self-suspending coroutine" to emphasize that suspending themselves is a crucial feature. Even "PEP492-coroutine" would be an improvement.
It does.
Bad phrasing on my part. Is there anything that prevents an asynchronous call (or waiting for one) without the "async with"? If so, I'm missing something important. Either way, I would prefer different wording in the PEP.
What justifies this limitation?
We want to avoid people passing regular generators and random objects to 'await', because it is a bug.
Why? Is it a bug just because you defined it that way? Is it a bug because the "await" makes timing claims that an object not making such a promise probably won't meet? (In other words, a marker interface.) Is it likely to be a symptom of something that wasn't converted correctly, *and* there are likely to be other bugs caused by that same lack of conversion?
For coroutines in PEP 492:
__await__ = __anext__ is the same as __call__ = __next__
__await__ = __aiter__ is the same as __call__ = __iter__
That tells me that it will be OK sometimes, but will usually be either a mistake or an API problem -- and it explains why. Please put those 3 lines in the PEP.
Is it an error to use "async with" on a regular context manager? If so, why? If it is just that doing so could be misleading, then what about "async with mgr1, mgr2, mgr3" -- is it enough that one of the three might suspend itself?
__aenter__ must return an awaitable
Why? Is there a fundamental reason, or it is just to avoid the hassle of figuring out whether or not the returned object is a future that might still need awaiting? Is there an assumption that the scheduler will let the thing-being awaited run immediately, but look for other tasks when it returns, and a further assumption that something which finishes the whole task would be too slow to run right away?
So does the PEP also provide some way of ensuring that there is an event loop? Does it assume that self-suspending coroutines will only ever be called by an already-running event loop compatible with asyncio.get_event_loop()? If so, please make these contextual assumptions explicit near the beginning of the PEP.
The same questions about why -- what is the harm?
I can imagine that as an implementation detail, the async for wouldn't be taken advantage of unless it was running under an event loop that knew to look for "async for" as suspension points. I'm not seeing what the actual harm is in either not happening to suspend (less efficient, but still correct), or in suspending between every step of a regular iterator (because, why not?)
(1) How does this differ from the existing asyncio.coroutine?
(2) Why does it need to have an environment variable? (Sadly, the answer may be "backwards compatibility", if you're really just specifying the existing asyncio interface better.)
(3) Why does it need [set]get_coroutine_wrapper, instead of just setting the asyncio.coroutines.coroutine attribute?
(4) Why do the get/set need to be in sys? Is the intent to do anything more than preface execution with:

    import asyncio.coroutines
    asyncio.coroutines._DEBUG = True

-jJ

Jim, On 2015-04-30 2:41 PM, Jim J. Jewett wrote: [...]
Yes, you can't use 'yield from' in __exit__/__enter__ in current Python.
Just as 'yield from' expects an iterable, 'await' expects an awaitable. That's the protocol. You can't pass random objects to 'with' statements, 'yield from', 'for..in', etc. If you write

    def gen():
        yield 1

    await gen()

then it's a bug.
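To make the protocol point concrete, here's a minimal sketch (illustrative, not part of the original message): 'await' accepts a generator only when it is marked with types.coroutine, and rejects a plain generator with TypeError. Driving the coroutine by hand with send() avoids needing an event loop.

```python
import types

def plain_gen():
    yield 1

@types.coroutine
def lowlevel():
    # generator-based coroutine: a legal 'await' target
    return (yield "token")

async def good():
    return await lowlevel()

async def bad():
    await plain_gen()   # a bug: plain generators aren't awaitable

# Drive 'good' manually, no event loop needed
coro = good()
assert coro.send(None) == "token"   # ran up to the yield inside lowlevel()
try:
    coro.send("hi")                 # resume; the coroutine finishes
    result = None
except StopIteration as exc:
    result = exc.value
assert result == "hi"

# 'bad' raises TypeError as soon as it reaches the await
try:
    bad().send(None)
    raised = False
except TypeError:
    raised = True
assert raised
```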
There is a line like that: https://www.python.org/dev/peps/pep-0492/#await-expression Look for "Also, please note..." line.
'with' requires an object with __enter__ and __exit__
'async with' requires an object with __aenter__ and __aexit__
You can have an object that implements both interfaces.
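A short sketch of such a dual-interface manager (illustrative; the class name is made up). Since nothing actually suspends here, the 'async with' path can be driven by hand without an event loop:

```python
class DualCM:
    # implements both the synchronous and asynchronous protocols
    def __enter__(self):
        return "sync"
    def __exit__(self, *exc):
        return False
    async def __aenter__(self):
        return "async"
    async def __aexit__(self, *exc):
        return False

# usable in a plain 'with' statement
with DualCM() as sync_v:
    pass

# and in 'async with', inside a coroutine
async def use_async():
    async with DualCM() as v:
        return v

coro = use_async()
try:
    coro.send(None)          # nothing suspends, so it runs to completion
    async_v = None
except StopIteration as exc:
    async_v = exc.value

assert sync_v == "sync" and async_v == "async"
```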
The fundamental reason why 'async with' is proposed is that you can't suspend execution in __enter__ and __exit__. If you need to suspend it there, use 'async with' and its __a*__ methods, but they have to return an awaitable (see https://www.python.org/dev/peps/pep-0492/#new-syntax and look at what 'async with' is semantically equivalent to).
You need some kind of loop, but it doesn't have to be the one from asyncio. There is at least one place in the PEP where it's mentioned that the PEP introduces a generic concept that can be used by asyncio *and* other frameworks.
The event loop doesn't need to know anything about 'async with' and 'async for'. For the loop it's always one thing -- something is awaiting somewhere for some result.
That section describes some hassles we had in asyncio to enable better debugging.
(3) because it allows enabling debug mode selectively, when we need it.
(4) because that's where functions like 'set_trace' live. set_coroutine_wrapper() also requires some modifications in the eval loop, so sys looks like the right place.
This won't work, unfortunately. You need to set the debug flag *before* you import asyncio package (otherwise we would have an unavoidable performance cost for debug features). If you enable it after you import asyncio, then asyncio itself won't be instrumented. Please see the implementation of asyncio.coroutine for details. set_coroutine_wrapper solves these problems. Yury

On Thu Apr 30 21:27:09 CEST 2015, Yury Selivanov replied: On 2015-04-30 2:41 PM, Jim J. Jewett wrote:
Bad phrasing on my part. Is there anything that prevents an asynchronous call (or waiting for one) without the "async with"?
If so, I'm missing something important. Either way, I would prefer different wording in the PEP.
Yes, you can't use 'yield from' in __exit__/__enter__ in current Python.
I tried it in 3.4, and it worked. I'm not sure it would ever be sensible, but it didn't raise any errors, and it did run. What do you mean by "can't use"?
That tells me that it will be OK sometimes, but will usually be either a mistake or an API problem -- and it explains why.
Please put those 3 lines in the PEP.
It was from reading the PEP that the question came up, and I just reread that section. Having those 3 explicit lines goes a long way towards explaining how an asyncio coroutine differs from a regular callable, in a way that the existing PEP doesn't, at least for me.
'with' requires an object with __enter__ and __exit__
'async with' requires an object with __aenter__ and __aexit__
You can have an object that implements both interfaces.
I'm still not seeing why with (let alone await with) can't just run whichever one it finds. "await with" won't actually let the BLOCK run until the future is resolved. So if a context manager only supplies __enter__ instead of __aenter__, then at most you've lost a chance to switch tasks while waiting -- and that is no worse than if the context manager just happened to be really slow.
For debugging this kind of mistakes there is a special debug mode in
Is the intent to do anything more than preface execution with:
    import asyncio.coroutines
    asyncio.coroutines._DEBUG = True
Why does asyncio itself have to be wrapped? Is that really something normal developers need to debug, or is it only for developing the stdlib itself? If it is only for developing the stdlib, then I would rather see workarounds like shoving _DEBUG into builtins when needed, as opposed to adding multiple attributes to sys. -jJ

On 2015-05-01 5:37 PM, Jim J. Jewett wrote:
It probably executed without errors, but it didn't run the generators.

    class Foo:
        def __enter__(self):
            yield from asyncio.sleep(0)
            print('spam')

    with Foo():
        pass  # <- 'spam' won't ever be printed.
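The effect can be verified directly (an illustrative sketch, not from the original message): 'with' happily binds the generator object that __enter__ returned, but never runs its body, so nothing is ever printed.

```python
import types

class Foo:
    def __enter__(self):
        # a generator function: calling it only *creates* a generator
        yield
        print('spam')
    def __exit__(self, *exc):
        return False

with Foo() as entered:
    pass

# __enter__ returned an unstarted generator; its body never ran
assert isinstance(entered, types.GeneratorType)
```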
let's say you have a function:

    def foo():
        with Ctx():
            pass

if Ctx.__enter__ is a generator/coroutine, then foo becomes a generator/coroutine (otherwise how (and to what) would you yield from/await on __enter__?). And then suddenly calling 'foo' doesn't do anything (it will return a generator/coroutine object). This isn't transparent or even remotely understandable.
Yes, normal developers need asyncio to be instrumented, otherwise you won't know what you did wrong when you called some asyncio code without 'await' for example. Yury

On Fri May 1 23:58:26 CEST 2015, Yury Selivanov wrote:
Yes, you can't use 'yield from' in __exit__/__enter__ in current Python.
What do you mean by "can't use"?
It probably executed without errors, but it didn't run the generators.
True. But it did return the one created by __enter__, so it could be bound to a variable and iterated within the block. There isn't an easy way to run the generator created by __exit__, and I'm not coming up with any obvious scenarios where it would be a sensible thing to do (other than using "with" on a context manager that *does* return a future instead of finishing). That said, I'm still not seeing why the distinction is so important that we have to enforce it at a language level, as opposed to letting the framework do its own enforcement. (And if the reason is performance, then make the checks something that can be turned off, or offer a fully instrumented loop as an alternative for debugging.)
If you enable it after you import asyncio, then asyncio itself won't be instrumented.
I'll trust you that it *does* work that way, but this sure sounds to me as though the framework isn't ready to be frozen with syntax, and maybe not even ready for non-provisional stdlib inclusion. I understand that the disconnected nature of asynchronous tasks makes them harder to debug. I heartily agree that the event loop should offer some sort of debug facility to track this. But the event loop is supposed to be pluggable. Saying that this requires not merely a replacement, or even a replacement before events are added, but a replacement made before Python ever even loads the default version ... That seems to be much stronger than sys.settrace -- more like instrumenting the ceval loop itself. And that is something that ordinary developers shouldn't have to do. -jJ

On Thu, Apr 30, 2015 at 11:41 AM, Jim J. Jewett <jimjjewett@gmail.com> wrote:
This seems so vague as to be useless to me. When using generators to implement iterators, "yield" very specifically means "here is the next value in the sequence I'm generating". (And to indicate there are no more values you have to use "return".)
Actually that's not even wrong. When using generators as coroutines, PEP 342 style, "yield" means "I am blocked waiting for a result that the I/O multiplexer is eventually going to produce". The argument to yield tells the multiplexer what the coroutine is waiting for, and it puts the generator stack frame on an appropriate queue. When the multiplexer has obtained the requested result it resumes the coroutine by using send() with that value, which resumes the coroutine/generator frame, making that value the return value from yield. Read Greg Ewing's tutorial for more color: http://www.cosc.canterbury.ac.nz/greg.ewing/python/yield-from/yield_from.htm...

Then I strongly request a more specific name than coroutine.
No, this is the name we've been using since PEP 342 and it's still the same concept. -- --Guido van Rossum (python.org/~guido)

On Thu, 30 Apr 2015 12:32:02 -0700 Guido van Rossum <guido@python.org> wrote:
No, this is the name we've been using since PEP 342 and it's still the same concept.
The fact that all syntax uses the word "async" and not "coro" or "coroutine" hints that it should really *not* be called a coroutine (much less a "native coroutine", which is both silly and a lie). Why not "async function"? Regards Antoine.

It is spelled "Raymond Luxury-Yacht", but it's pronounced "Throatwobbler Mangrove". :-) I am actually fine with calling a function defined with "async def ..." an async function, just as we call a function containing "yield" a generator function. However I prefer to still use "coroutine" to describe the concept implemented by async functions. *Some* generator functions also implement coroutines; however I would like to start a movement where eventually we'll always be using async functions when coroutines are called for, dedicating generators once again to their pre-PEP-342 role of a particularly efficient way to implement iterators. Note that I'm glossing over the distinction between yield and yield-from here; both can be used to implement the coroutine pattern, but the latter has some advantages when the pattern is used to support an event loop: most importantly, when using yield-from-style coroutines, a coroutine can use return to pass a value directly to the stack frame that is waiting for its result. Prior to PEP 380 (yield from), the trampoline would have to be involved in this step, and there was no standard convention for how to communicate the final result to the trampoline; I've seen "returnValue(x)" (Twisted inlineCallbacks), "raise ReturnValue(x)" (Google App Engine NDB), "yield Return(x)" (Monocle) and I believe I've seen plain "yield x" too (the latter two being abominations in my mind, since it's unclear whether the generator is resumed after a value-returning yield). While yield-from was an improvement over plain yield, await is an improvement over yield-from. As with most changes to Python (as well as natural evolution), an improvement often leads the way to another improvement -- one that wasn't obvious before. And that's fine. If I had laid awake worrying about the best way to spell async functions while designing asyncio, PEP 3156 probably still wouldn't have been finished today.
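The yield-from return-value mechanics described here can be shown in a few lines (an illustrative sketch, not from the original message): the child's 'return' value travels straight to the waiting frame via StopIteration, with no trampoline convention needed.

```python
def child():
    # waits once, then returns its result directly
    yield "waiting"
    return 42

def parent():
    result = yield from child()   # 42 arrives here, no trampoline involved
    return result + 1

g = parent()
assert next(g) == "waiting"       # child's yield passes through parent
try:
    g.send(None)                  # resume child; both frames finish
    final = None
except StopIteration as exc:
    final = exc.value
assert final == 43
```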
On Thu, Apr 30, 2015 at 12:40 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
-- --Guido van Rossum (python.org/~guido)

On 30 April 2015 at 20:32, Guido van Rossum <guido@python.org> wrote:
However, it is (as I noted in my other email) not very well documented. There isn't a glossary entry in the docs for "coroutine", and there's nothing pointing out that coroutines need (for anything other than toy cases) an event loop, trampoline, or IO multiplexer (call it what you want, although I prefer terms that don't make it sound like it's exclusively about IO). I'll raise an issue on the tracker for this, and I'll see if I can write up something. Once there's a non-expert's view in the docs, the experts can clarify the technicalities if I get them wrong :-) I propose a section under https://docs.python.org/3/reference/expressions.html#yield-expressions describing coroutines, and their usage. Paul

On Thu, Apr 30, 2015 at 3:32 PM, Guido van Rossum <guido@python.org> wrote: (me:)
but an asynchronous (PEP492) coroutine is primarily saying:
"This might take a while, go ahead and do something else meanwhile."
(Yury:) Correct. (Guido:) Actually that's not even wrong. When using generators as coroutines, PEP 342
style, "yield" means "I am blocked waiting for a result that the I/O multiplexer is eventually going to produce".
So does this mean that yield should NOT be used just to yield control if a task isn't blocked? (e.g., if its next step is likely to be long, or low priority.) Or even that it wouldn't be considered a co-routine in the python sense? If this is really just about avoiding busy-wait on network IO, then coroutine is way too broad a term, and I'm uncomfortable restricting a new keyword (async or await) to what is essentially a Domain Specific Language. -jJ

On Fri, May 1, 2015 at 11:26 AM, Jim J. Jewett <jimjjewett@gmail.com> wrote:
I'm not sure what you're talking about. Does "next step" refer to something in the current stack frame or something that you're calling? None of the current uses of "yield" (the keyword) in Python are good for lowering priority of something. It's not just the GIL, it's that coroutines (by whatever name) are still single-threaded. If you have something long-running CPU-intensive you should probably run it in a background thread (or process) e.g. using an executor.
The common use case is network I/O. But it's quite possible to integrate coroutines with a UI event loop. -- --Guido van Rossum (python.org/~guido)

On 05/01, Guido van Rossum wrote:
On Fri, May 1, 2015 at 11:26 AM, Jim J. Jewett <jimjjewett@gmail.com> wrote:
So when a generator is used as an iterator, yield and yield from are used to produce the actual working values... But when a generator is used as a coroutine, yield (and yield from?) are used to provide context about when they should be run again? -- ~Ethan~

On Fri, May 1, 2015 at 12:24 PM, Ethan Furman <ethan@stoneleaf.us> wrote:
The common thing is that the *argument* to yield provides info to whoever/whatever is on the other end, and the *return value* from yield [from] is whatever they returned in response.

When using yield to implement an iterator, there is no return value from yield -- the other end is the for-loop that calls __next__, and it just says "give me the next value", and the value passed to yield is that next value.

When using yield [from] to implement a coroutine the other end is probably a trampoline or scheduler or multiplexer. The argument to yield [from] tells the scheduler what you are waiting for. The scheduler resumes the coroutine when that value is available. At this point please go read Greg Ewing's tutorial. Seriously. http://www.cosc.canterbury.ac.nz/greg.ewing/python/yield-from/yield_from.htm...

Note that when using yield from, there is a third player: the coroutine that contains the "yield from". This is neither the scheduler nor the other thing; the communication between the scheduler and the other thing passes transparently *through* this coroutine. When the other thing has a value for this coroutine, it uses *return* to send it a value. The "other thing" here is a lower-level coroutine -- it could either itself also use yield-from and return, or it could be an "I/O primitive" that actually gives the scheduler a specific instruction (e.g. wait until this socket becomes readable). Please do read Greg's tutorial. -- --Guido van Rossum (python.org/~guido)
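A toy trampoline along these lines (illustrative only; the "reschedule" instruction is made up) shows all three players: the scheduler, the coroutine containing 'yield from', and the lower-level "I/O primitive" whose yield passes transparently through it.

```python
from collections import deque

def sleep0():
    # "I/O primitive": instructs the scheduler to requeue the caller
    yield "reschedule"

def worker(name, log):
    # the middle player: its 'yield from' forwards sleep0's instruction
    for i in range(2):
        log.append((name, i))
        yield from sleep0()

def run(tasks):
    # the scheduler: round-robins every task that asks to be rescheduled
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            instruction = next(task)
        except StopIteration:
            continue                 # task finished
        if instruction == "reschedule":
            queue.append(task)

log = []
run([worker("a", log), worker("b", log)])
# the two workers interleave fairly
assert log == [("a", 0), ("b", 0), ("a", 1), ("b", 1)]
```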

On Fri, May 1, 2015 at 2:59 PM, Guido van Rossum <guido@python.org> wrote:
I'm not sure what you're talking about. Does "next step" refer to something in the current stack frame or something that you're calling?
The next piece of your algorithm.
If there are more tasks than executors, yield is a way to release your current executor and go to the back of the line. I'm pretty sure I saw several examples of that style back when coroutines were first discussed. -jJ

On Fri, 1 May 2015 13:10:01 -0700 Guido van Rossum <guido@python.org> wrote:
I think Jim is saying that when you have a non-trivial task running in the event loop, you can "yield" from time to time to give a chance to other events (e.g. network events or timeouts) to be processed timely. Of course, that assumes the event loop will somehow prioritize them over the just yielded task. Regards Antoine.

On Fri, May 1, 2015 at 1:22 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
Yeah, but (unlike some frameworks) when using asyncio you can't just put a plain "yield" statement in your code. You'd have to do something like `yield from asyncio.sleep(0)`. -- --Guido van Rossum (python.org/~guido)

On Fri, May 1, 2015 at 4:10 PM, Guido van Rossum <guido@python.org> wrote:
Could you dig up the actual references? It seems rather odd to me to mix coroutines and threads this way.
I can try in a few days, but the primary case (and perhaps the only one with running code) was for n_executors=1. They assumed there would only be a single thread, or at least only one that was really important to the event loop -- the pattern was often described as an alternative to relying on threads. FWIW, Ron Adam's "yielding" in https://mail.python.org/pipermail/python-dev/2015-May/139762.html is in the same spirit. You replied it would be better if that were done by calling some method on the scheduling loop, but that isn't any more standard, and the yielding function is simple enough that it will be reinvented. -jJ

On 29 April 2015 at 18:43, Jim J. Jewett <jimjjewett@gmail.com> wrote:
I agree. While I don't use coroutines/asyncio, and I may never do so, I will say that I find Python's approach very difficult to understand. I'd hope that the point of PEP 492, by making await/async first class language constructs, would be to make async programming more accessible in Python. Whether that will actually be the case isn't particularly clear to me. And whether "async programming" and "coroutines" are the same thing, I'm even less sure of. I haven't really followed the discussions here, because they seem to be about details that are completely confusing to me. In principle, I support the PEP, on the basis that working towards better coroutine/async support in Python seems worthwhile to me. But until the whole area is made more accessible to the average programmer, I doubt any of this will be more than a niche area in Python. For example, the PEP says:

"""
New Coroutine Declaration Syntax

The following new syntax is used to declare a coroutine:

    async def read_data(db):
        pass
"""

Looking at the Wikipedia article on coroutines, I see an example of how a producer/consumer process might be written with coroutines:

    var q := new queue

    coroutine produce
        loop
            while q is not full
                create some new items
                add the items to q
            yield to consume

    coroutine consume
        loop
            while q is not empty
                remove some items from q
                use the items
            yield to produce

(To start everything off, you'd just run "produce"). I can't even see how to relate that to PEP 492 syntax. I'm not allowed to use "yield", so should I use "await consume" in produce (and vice versa)? I'd actually expect to just write 2 generators in Python, and use .send() somehow (it's clunky and I can never remember how to write the calls, but that's OK, it just means that coroutines don't have first-class syntax support in Python). This is totally unrelated to asyncio, which is the core use case for all of Python's async support. But it's what I think of when I see the word "coroutine" (and Wikipedia agrees).
Searching for "Async await" gets me to the Microsoft page "Asynchronous Programming with Async and Await" describing the C# keywords. That looks more like what PEP 492 is talking about, but it uses the name "async method". Maybe that's what the PEP should do, too, and leave the word "coroutine" for the yielding of control that I quoted from Wikipedia above. Confusedly, Paul

Hi Paul, On 2015-04-29 2:26 PM, Paul Moore wrote:
It will make it more accessible in Python. asyncio is getting a lot of traction, and with this PEP accepted I can see it only becoming easier to work with it (or any other async frameworks that start using the new syntax/protocols).
That Wikipedia page is very generic, and the pseudo-code that it uses does indeed look confusing. Here's how it might look (this is the same pseudo-code but tailored for PEP 492, not real working code):

    q = asyncio.Queue(maxsize=100)

    async def produce():
        # you might want to wrap it all in 'while True'
        while not q.full():
            item = create_item()
            await q.put(item)

    async def consume():
        while not q.empty():
            item = await q.get()
            process_item(item)

Thanks! Yury
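For completeness, a runnable variant of that sketch (an illustrative adaptation: the produce/consume signatures, the sentinel, and the use of asyncio.gather and the later-added asyncio.run are mine, not part of the original message):

```python
import asyncio

async def produce(q, items):
    for item in items:
        await q.put(item)      # suspends here when the queue is full
    await q.put(None)          # sentinel: no more items

async def consume(q, out):
    while True:
        item = await q.get()   # suspends here when the queue is empty
        if item is None:
            break
        out.append(item * 2)   # "process" the item

async def main():
    q = asyncio.Queue(maxsize=2)
    out = []
    await asyncio.gather(produce(q, [1, 2, 3]), consume(q, out))
    return out

result = asyncio.run(main())
assert result == [2, 4, 6]
```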

On 29 April 2015 at 19:42, Yury Selivanov <yselivanov.ml@gmail.com> wrote:
I think the "loop" in the Wikipedia pseudocode was intended to be the "while True" here, not part of the "while" on the next line.
Thanks for that. That does look pretty OK. One question, though - it uses an asyncio Queue. The original code would work just as well with a list, or more accurately, something that wasn't designed for async use. So the translation isn't completely equivalent. Also, can I run the produce/consume just by calling produce()? My impression is that with asyncio I need an event loop - which "traditional" coroutines don't need. Nevertheless, the details aren't so important, it was only a toy example anyway. However, just to make my point precise, here's a more or less direct translation of the Wikipedia code into Python. It doesn't actually work, because getting the right combinations of yield and send stuff is confusing to me. Specifically, I suspect that "yield produce.send(None)" isn't the right way to translate "yield to produce". But it gives the idea.

    data = [1,2,3,4,5,6,7,8,9,10]
    q = []

    def produce():
        while True:
            while len(q) < 10:
                if not data:
                    return
                item = data.pop()
                print("In produce - got", item)
                q.append(item)
            yield consume.send(None)

    total = 0

    def consume():
        while True:
            while q:
                item = q.pop()
                print("In consume - handling", item)
                global total
                total += item
            yield produce.send(None)

    # Prime the coroutines
    produce = produce()
    consume = consume()
    next(produce)
    print(total)

The *only* bits of this that are related to coroutines are:

1. yield consume.send(None) (and the same for produce)
2. produce = produce() (and the same for consume) - priming the coroutines
3. next(produce) to start the coroutines

I don't think this is at all related to PEP 492 (which is about async) but it's what is traditionally meant by coroutines. It would be nice to have a simpler syntax for these "traditional" coroutines, but it's a very niche requirement, and probably not worth it. But the use of "coroutine" in PEP 492 for the functions introduced by "async def" is confusing - at least to me - because I think of the above, and not of async.
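For comparison, here is one way to get a working generator-based hand-off (an illustrative sketch, not from the original message): the consumer is a generator suspended at 'yield', and a plain producer function drives it with send() -- each send() is the "yield to consume" step.

```python
q = []

def produce(consumer, data):
    consumer.send(None)          # prime: advance consumer to its first yield
    for item in data:
        q.append(item)
        consumer.send(None)      # "yield to consume"
    consumer.close()             # done: stop the suspended consumer

def consume(out):
    while True:
        yield                    # wait until produce resumes us
        while q:
            out.append(q.pop())  # drain whatever produce queued

out = []
produce(consume(out), [1, 2, 3])
# each item is handed over and consumed in turn
assert out == [1, 2, 3]
```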
Why not just call them "async functions" and leave the term coroutine for the above flow control construct, which is where it originated? But maybe that ship has long sailed - the term "coroutine" is pretty entrenched in the asyncio documentation. If so, then I guess we have to live with the consequences. Paul

Paul, On 2015-04-29 3:19 PM, Paul Moore wrote:
Well, yes. Coroutine is a generic term. And you can use PEP 492 coroutines without asyncio; in fact that's how most tests for the reference implementation are written. Coroutine objects have .send(), .throw() and .close() methods (same as generator objects in Python). You can work with them without a loop, but loop implementations contain a lot of logic to implement the actual cooperative execution. You can use generators as coroutines, and nothing would prevent you from doing that after PEP 492; moreover, for some use-cases it might be quite a good decision. But a lot of the code -- web frameworks, network applications, etc -- will hugely benefit from the proposal, streamlined syntax and async for/with statements. [..]
Everybody is pulling me in a different direction :) Guido proposed to call them "native coroutines". Some people think that "async functions" is a better name. Greg loves his "cofunction" term. I'm flexible about how we name 'async def' functions. I like to call them "coroutines", because that's what they are, and that's how asyncio calls them. It's also convenient to use 'coroutine-object' to explain what is the result of calling a coroutine. Anyways, I'd be OK to start using a new term, if "coroutine" is confusing. Thanks, Yury

On Wed, Apr 29, 2015 at 2:42 PM, Yury Selivanov <yselivanov.ml@gmail.com> wrote:
Anyways, I'd be OK to start using a new term, if "coroutine" is confusing.
According to Wikipedia <http://en.wikipedia.org/wiki/Coroutine>, term "coroutine" was first coined in 1958, so several generations of computer science graduates will be familiar with the textbook definition. If your use of "coroutine" matches the textbook definition of the term, I think you should continue to use it instead of inventing new names which will just confuse people new to Python. Skip

On Wed, Apr 29, 2015 at 1:14 PM, Skip Montanaro <skip.montanaro@gmail.com> wrote:
IIUC the problem is that Python has or will have a number of different things that count as coroutines by that classic CS definition, including generators, "async def" functions, and in general any object that implements the same set of methods as one or both of these objects, or possibly inherits from a certain abstract base class. It would be useful to have some terms to refer specifically to async def functions and the await protocol as opposed to generators and the iterator protocol, and "coroutine" does not make this distinction. -n -- Nathaniel J. Smith -- http://vorpus.org

Maybe it would help to refer to PEP 342, which first formally introduced the concept of coroutines (as a specific use case of generators) in Python. Personally I don't care too much which term the PEP uses, as long as it defines its terms. The motivation is already clear to me; it's the details that I care about before approving this PEP. On Wed, Apr 29, 2015 at 1:19 PM, Nathaniel Smith <njs@pobox.com> wrote:
-- --Guido van Rossum (python.org/~guido)

Skip Montanaro wrote:
I don't think anything in asyncio or PEP 492 fits that definition directly. Generators and async def functions seem to be what that page calls a "generator" or "semicoroutine": they differ in that coroutines can control where execution continues after they yield, while generators cannot, instead transferring control back to the generator's caller. -- Greg

Hello, On Thu, 30 Apr 2015 18:53:00 +1200 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
But of course it's only a Wikipedia page, which doesn't mean it has to provide complete and well-defined picture, and quality of some (important) Wikipedia pages is indeed pretty poor and doesn't improve. -- Best regards, Paul mailto:pmiscml@gmail.com

On 29 April 2015 at 20:42, Yury Selivanov <yselivanov.ml@gmail.com> wrote:
Everybody is pulling me in a different direction :)
Sorry :-)
If it helps, ignore my opinion - I'm not a heavy user of coroutines or asyncio, so my view shouldn't have too much weight. Thanks for your response - my question was a little off-topic, but your reply has made things clearer for me. Paul

On 29 April 2015 at 20:42, Yury Selivanov <yselivanov.ml@gmail.com> wrote:
I'd like the object created by an 'async def' statement to be called a 'coroutine function' and the result of calling it to be called a 'coroutine'. This is consistent with the usage of 'generator function' and 'generator', and has two advantages IMO:
- they both would follow the pattern: an 'X function' is a function statement that when called returns an 'X'.
- when the day comes to define generator coroutines, it will be clear what to call them: 'generator coroutine function' will be the function definition and 'generator coroutine' will be the object it creates.
Cheers, -- Arnaud

On 30 April 2015 at 09:50, Arnaud Delobelle <arnodel@gmail.com> wrote:
That would be an improvement over the confusing terminology in the PEP atm. The PEP proposes to name the inspect functions inspect.iscoroutine() and inspect.iscoroutinefunction(). According to the PEP iscoroutine() identifies "coroutine objects" and iscoroutinefunction() identifies "coroutine functions" -- a term which is not defined in the PEP but presumably means what the PEP calls a "coroutine" in the glossary. Calling the async def function an "async function" and the object it returns a "coroutine" makes for the clearest terminology IMO (provided the word coroutine is not also used for anything else). It would help to prevent both experienced and new users from confusing the two related but necessarily distinct concepts. Clearly distinct terminology makes it easier to explain/discuss something if nothing else because it saves repeating definitions all the time. -- Oscar

Hi Oscar, I've updated the PEP with some fixes of the terminology: https://hg.python.org/peps/rev/f156b272f860 I still think that 'coroutine functions' and 'coroutines' is a better pair than 'async functions' and 'coroutines'. First, it's similar to existing terminology for generators. Second, it's less confusing. With PEP 492, at some point using generators to implement coroutines won't be a widespread practice, so 'async def' functions will be the only language construct that returns them. Yury On 2015-05-05 12:01 PM, Oscar Benjamin wrote:

On 5 May 2015 at 17:48, Yury Selivanov <yselivanov.ml@gmail.com> wrote:
I've updated the PEP with some fixes of the terminology: https://hg.python.org/peps/rev/f156b272f860
Yes that looks better.
I still think that 'coroutine functions' and 'coroutines' is a better pair than 'async functions' and 'coroutines'.
Fair enough. The terminology in the PEP seems consistent now which is more important than the exact terms used. -- Oscar

Hello, On Wed, 29 Apr 2015 20:19:40 +0100 Paul Moore <p.f.moore@gmail.com> wrote: []
All this confusion stems from the fact that the Wikipedia article fails to clearly provide classification dichotomies for coroutines. I suggest reading the Lua coroutine description as a much better attempt at classification: http://www.lua.org/pil/9.1.html . It is, for example, explicit in mentioning a common pitfall: "Some people call asymmetric coroutine semi-coroutines (because they are not symmetrical, they are not really co). However, other people use the same term semi-coroutine to denote a restricted implementation of coroutines". Comparing that to the Wikipedia article, you'll notice that it uses "semicoroutine" in just one of those senses, and, well, different people apply the "semi" part along a different classification axis. So, trying to draw a table from Lua's text, there are the following two axes:

Axis 1: Symmetric vs Asymmetric. Asymmetric coroutines use two control flow constructs, akin to subroutine call and return. (Names vary; return is usually called yield.) Symmetric ones use only one. You can think of symmetric coroutines as only calling or only returning, though a less confusing term is "switch to".

Axis 2: "Lexical" vs "Dynamic". Naming is less standardized. Lua calls its coroutines "true" coroutines and others "generators". Others say "coroutines" vs "generators". But the real difference is intuitively akin to lexical vs dynamic scoping. "Lexical" coroutines require explicit marking of each (including recursive) call into a coroutine. "Dynamic" ones do not - you can call a normal-looking function, and it suddenly passes control to somewhere else (another coroutine), a fact about which you don't have a clue.

All *four* recombined types above are coroutines, albeit with slightly different properties. Symmetric dynamic coroutines are the most powerful type - as powerful as an abyss. They are what is usually used to frighten the innocent. Wikipedia shows you an example of them.
No sane real-world language uses symmetric coroutines - they're not useful without continuations, and sane real-world people don't want to manage continuations manually. Python, Lua, and C# use asymmetric coroutines. Python and C# use asymmetric "lexical" coroutines - the simplest, and thus safest, type, but one with limitations with regard to doing mind-boggling things. Lua has "dynamic" asymmetric coroutines - a more powerful, and thus more dangerous, type (you want to look with a jaundiced eye at that guy's framework based on "dynamic" coroutines - you'd better rewrite it from scratch before you trust it). -- Best regards, Paul mailto:pmiscml@gmail.com
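A minimal sketch of what the "asymmetric lexical" category means in Python terms (hypothetical example, not from the thread): there are exactly two control-flow constructs (resume via next/send, suspend via yield), and every suspension point is syntactically visible.

```python
def ticker():
    # Asymmetric: control enters via next()/send() and leaves via yield.
    # "Lexical": every suspension point is spelled out as 'yield'; a plain
    # function called from this body cannot suspend this coroutine.
    total = 0
    while True:
        n = yield total   # the only place this coroutine can suspend
        total += n

t = ticker()
next(t)            # prime: run to the first yield
a = t.send(2)      # resume with 2; suspends again yielding the new total
b = t.send(3)
print(a, b)        # 2 5
```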

Paul Moore wrote:
The Pythonic way to do things like that is to write the producer as a generator, and the consumer as a loop that iterates over it. Or the consumer as a generator, and the producer as a loop that send()s things into it. To do it symmetrically, you would need to write them both as generators (or async def functions or whatever) plus a mini event loop to tie the two together. -- Greg
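A minimal sketch of the symmetric arrangement Greg describes - both sides written as generators, tied together by a tiny driving loop (all names hypothetical):

```python
received = []

def producer():
    # Hands items to the driving loop, one per resumption.
    for i in range(3):
        yield i

def consumer():
    # Waits for the driver to send items in.
    while True:
        item = yield
        received.append(item)

# The "mini event loop": ordinary code shuttling values between the two.
cons = consumer()
next(cons)                 # prime the consumer to its first yield
for item in producer():
    cons.send(item)

print(received)            # [0, 1, 2]
```

Neither generator ever names the other; the driver mediates every exchange, which is exactly the trampoline role asyncio's event loop plays.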

On 29 April 2015 at 20:19, Paul Moore <p.f.moore@gmail.com> wrote:
Hmm, when I try to fix this "minor" (as I thought!) issue with my code, I discover it's more fundamental. The error I get is:

    Traceback (most recent call last):
      File ".\coro.py", line 28, in <module>
        next(produce)
      File ".\coro.py", line 13, in produce
        yield consume.send(None)
      File ".\coro.py", line 23, in consume
        yield produce.send(None)
    ValueError: generator already executing

What I now realise that means is that you cannot have a producer send to a consumer which then sends back to the producer. That's what the "generator already executing" message means. This is fundamentally different from the "traditional" use of coroutines as described in the Wikipedia article, and as I thought was implemented in Python. The Wikipedia example allows two coroutines to freely yield between each other. Python, on the other hand, does not support this - it requires the mediation of some form of "trampoline" controller (or event loop, in asyncio terms) to redirect control. [1] This limitation of Python's coroutines is not mentioned anywhere in PEP 342, and that's probably why I never really understood Python coroutines properly, as my mental model didn't match the implementation. Given that any non-trivial use of coroutines in Python requires an event loop / trampoline, I begin to understand the logic behind asyncio and this PEP a little better. I'm a long way behind in understanding the details, but at least I'm no longer completely baffled. Somewhere, there should be an explanation of the difference between Python's coroutines and Wikipedia's - I can't be the only person to be confused like this. But I don't think there are any docs covering "coroutines in Python" outside of PEP 342 - the docs just cover the components (the send and throw methods, the yield expression, etc). Maybe it could be covered in the send documentation (as that's what gives the "generator already executing" error). I'll try to work up a doc patch.
Actually, looking at the docs, I can't even *find* where the behaviour of the send method is defined - can someone point me in the right direction? Paul [1] It's sort of similar to how Python doesn't do tail call elimination. Symmetric yields rely on stack frames that are no longer needed being discarded if they are to avoid unlimited recursion, so to have symmetric yields, Python would need a form of tail call ("tail yield", I guess :-)) elimination.
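A minimal reconstruction of the failure Paul's traceback shows (function names taken from his traceback; line numbers will differ): each generator tries to resume the other while it is itself still running.

```python
def produce():
    while True:
        yield consume.send(None)   # tries to resume consume...

def consume():
    while True:
        yield produce.send(None)   # ...which tries to resume produce,
                                   # whose frame is still executing

produce = produce()
consume = consume()

try:
    next(produce)
    error = None
except ValueError as e:
    error = str(e)

print(error)   # generator already executing
```

The re-entry is detected inside produce.send(): a generator frame cannot be resumed while it is on the call stack, so symmetric hand-offs need a trampoline in between.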

Paul Moore wrote:
I agree. While I don't use coroutines/asyncio, and I may never do so, I will say that I find Python's approach very difficult to understand.
Well, I tried to offer something easier to understand. The idea behind PEP 3152 is that writing async code should be just like writing threaded code, except that the suspension points are explicit. But apparently that was too simple, or something.
Aaargh, this is what we get for overloading the word "coroutine". The Wikipedia article is talking about a technique where coroutines yield control to other explicitly identified coroutines. Coroutines in asyncio don't work that way; instead they just suspend themselves, and the event loop takes care of deciding which one to run next.
I can't even see how to relate that to PEP 492 syntax. I'm not allowed to use "yield",
You probably wouldn't need to explicitly yield, since you'd use an asyncio.Queue for passing data between the tasks, which takes care of suspending until data becomes available. You would only need to yield if you were implementing some new synchronisation primitive. Yury's answer to that appears to be that you don't do it with an async def function, you create an object that implements the awaitable-object protocol directly. -- Greg
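The asyncio.Queue approach Greg describes, written in today's spelling of the PEP 492 syntax (hypothetical producer/consumer names; asyncio.run is a later 3.7 convenience used here for brevity):

```python
import asyncio

async def producer(q):
    for i in range(3):
        await q.put(i)        # suspends only if the queue is full
    await q.put(None)         # sentinel: no more data

async def consumer(q):
    results = []
    while True:
        item = await q.get()  # suspends until data is available
        if item is None:
            return results
        results.append(item)

async def main():
    q = asyncio.Queue()
    task = asyncio.ensure_future(producer(q))
    results = await consumer(q)
    await task
    return results

print(asyncio.run(main()))    # [0, 1, 2]
```

Note that neither coroutine explicitly yields to the other; the queue takes care of suspending and the event loop decides who runs next.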

On 30 April 2015 at 06:39, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Yep, I understand that. It's just that that's what I understand by coroutines.
Precisely. As I say, the terminology is probably not going to change now - no big deal in practice. Paul

On Wed, Apr 29, 2015 at 2:26 PM, Paul Moore <p.f.moore@gmail.com> wrote:
On 29 April 2015 at 18:43, Jim J. Jewett <jimjjewett@gmail.com> wrote:
So? PEP 492 never says what coroutines *are* in a way that explains why it matters that they are different from generators.
...
I think so ... but the fact that nothing is actually coming via the await channel makes it awkward. I also worry that it would end up with an infinite stack depth, unless the await were actually replaced with some sort of framework-specific scheduling primitive, or one of them were rewritten differently to ensure it returned to the other instead of calling it anew. I suspect the real problem is that the PEP is really only concerned with a very specific subtype of coroutine, and these don't quite fit. (Though it could be done by somehow making them both await on the queue status, instead of on each other.) -jJ

On Thu, Apr 30, 2015 at 10:24 AM, Jim J. Jewett <jimjjewett@gmail.com> wrote:
I suspect the real problem is that the PEP is really only concerned with a very specific subtype of coroutine, and these don't quite fit.
That's correct. The PEP is concerned with the existing notion of coroutines in Python, which was first introduced by PEP 342: Coroutines via Enhanced Generators. The Wikipedia definition of coroutine (which IIRC is due to Knuth) is quite different, and nobody who actually uses the coding style introduced by PEP 342 should mistake one for the other. This same notion of "Pythonic" (so to speak) coroutines was refined by PEP 380, which introduced yield from. It was then *used* in PEP 3156 (the asyncio package) for the specific purpose of standardizing a way to do I/O multiplexing using an event loop.

The basic premise of using coroutines with the asyncio package is that most of the time you can write *almost* sequential code as long as you insert "yield from" in front of all blocking operations (and as long as you use blocking operations that are implemented by or on top of the asyncio package). This makes the code easier to follow than code written in the "traditional" event-loop-based I/O multiplexing style (which is heavy on callbacks, or callback-like abstractions like Twisted's Deferred).

However, heavy users of the asyncio package (like Yury) discovered some common patterns when using coroutines that were awkward. In particular, "yield from" is quite a mouthful, the coroutine version of a for-loop is awkward, and a with-statement can't have a blocking operation in __exit__ (because there's no explicit yield opcode). PEP 492 proposes a quite simple and elegant solution for these issues. Most of the technical discussion about the PEP is on getting the details right so that users won't have to worry about them, and can instead just continue to write *almost* sequential code when using the asyncio package (or some other framework that offers an event loop integrated with coroutines). -- --Guido van Rossum (python.org/~guido)
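A sketch of the "almost sequential" style Guido describes, written with the await syntax this PEP proposes (hypothetical names; asyncio.sleep stands in for a real blocking operation, and asyncio.run is a later 3.7 convenience):

```python
import asyncio

async def fetch(value):
    # The only suspension point, explicitly marked with 'await'.
    await asyncio.sleep(0)
    return value

async def handler():
    # Reads almost like sequential code: no callbacks, no Deferred chains.
    a = await fetch(1)
    b = await fetch(2)
    return a + b

result = asyncio.run(handler())
print(result)   # 3
```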

Literary critic here. In section "Specification"
The usual phrasing of "strongly suggested" in specifications is "presumes knowledge". Some people think "strongly suggest <do>ing" is presumptuous and condescending, YMMV. Also, the relationship to PEP 3152 should be mentioned IMO. I propose:

    This specification presumes knowledge of the implementation of
    coroutines in Python (PEP 342 and PEP 380). Motivation for the syntax
    changes proposed here comes from the asyncio framework (PEP 3156) and
    the "Cofunctions" proposal (PEP 3152, now rejected in favor of this
    specification).

I'm not entirely happy with my phrasing, because there are at least four more or less different concepts that might claim the bare word "coroutine":

- this specification
- the implementation of this specification
- the syntax used to define coroutines via PEPs 342 and 380
- the semantics of PEP 342/380 coroutines

In both your original and my rephrasing, the use of "coroutine" violates your convention that it refers to the PEP's proposed syntax for coroutines. Instead it refers to the semantics of coroutines implemented via PEP 342/380. This is probably the same concern that motivated Guido's suggestion to use "native coroutines" for the PEP 492 syntax (but I'm not Dutch, so maybe they're not the same :-). I feel this is a real hindrance to understanding for someone coming to the PEP for the first time. You know which meaning of coroutine you mean, but the new reader needs to think hard enough to disambiguate every time the word occurs. If people agree with me, I could go through the PEP and revise mentions of "coroutine" in "disambiguated" style. In section "Comprehensions":
Don't invite trouble.<wink /> How about: Syntax for asynchronous comprehensions could be provided, but this construct is outside of the scope of this PEP. In section "Async lambdas"
Same recommendation as for "Comprehensions". I wouldn't mention the tentative syntax, it is both obvious and inviting to trouble.
A partial list of commentators I've found to be notable, YMMV: Greg Ewing for PEP 3152 and his Loyal Opposition to this PEP. Mark Shannon's comments have led to substantial clarifications of motivation for syntax, at least in my mind. Paul Sokolovsky for information about the MicroPython implementation.

Hi Stephen, Thanks a lot for the feedback and suggestions. I'll apply them to the PEP. On 2015-04-28 11:03 PM, Stephen J. Turnbull wrote:
Your wording is 100% better and it's time to mention PEP 3152 too.
I also like Guido's suggestion to use "native coroutine" term. I'll update the PEP (I have several branches of it in the repo that I need to merge before the rename).
Agree. Do you think it'd be better to combine comprehensions and async lambdas in one section?
Sure! I was going to add everybody after the PEP is accepted/rejected/postponed.
Thanks! Yury

Yury Selivanov wrote:
I'd still prefer to avoid use of the word "coroutine" altogether as being far too overloaded. I think even the term "native coroutine" leaves room for ambiguity. It's not clear to me whether you intend it to refer only to functions declared with 'async def', or to any function that returns an awaitable object. The term "async function" seems like a clear and unambiguous way to refer to the former. I'm not sure what to call the latter. -- Greg

Yury Selivanov schrieb am 28.04.2015 um 05:07:
e) Should we add a coroutine ABC (for cython etc)?
Sounds like the right thing to do, yes. IIUC, a Coroutine would be a new stand-alone ABC with send, throw and close methods. Should a Generator then inherit from both Iterator and Coroutine, or would that counter your intention to separate coroutines from generators as a concept? I mean, they do share the same interface ... It seems you're already aware of https://bugs.python.org/issue24018 Stefan
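A rough sketch of what such a stand-alone ABC might look like (hypothetical; just the send/throw/close interface Stefan lists, plus a structural __subclasshook__ so that generators - which share the interface - are recognized):

```python
from abc import ABC, abstractmethod

class Coroutine(ABC):
    """Hypothetical stand-alone coroutine ABC (sketch, not the final API)."""

    @abstractmethod
    def send(self, value):
        """Resume the coroutine, sending 'value' in."""
        raise StopIteration

    @abstractmethod
    def throw(self, typ, val=None, tb=None):
        """Raise an exception inside the coroutine."""
        raise typ

    def close(self):
        """Default close(): throw GeneratorExit and swallow the fallout."""
        try:
            self.throw(GeneratorExit)
        except (GeneratorExit, StopIteration):
            pass

    @classmethod
    def __subclasshook__(cls, C):
        if cls is Coroutine:
            # Structural check: anything with the three methods qualifies.
            return all(hasattr(C, m) for m in ('send', 'throw', 'close'))
        return NotImplemented

def gen():
    yield

print(isinstance(gen(), Coroutine))   # True: generators share the interface
```

Something along these lines (with Awaitable alongside it) eventually landed as collections.abc.Coroutine in Python 3.5, which also answers the Generator inheritance question: Generator stayed an Iterator, and Coroutine was kept separate.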

On 28 Apr 2015, at 5:07, Yury Selivanov wrote:
Does this mean it's not possible to implement an async version of os.walk() if we had an async version of os.listdir()? I.e. for async code we're back to implementing iterators "by hand" instead of using generators for it.
[...]
Servus, Walter
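Implementing the iteration "by hand" as Walter describes is indeed what the PEP requires for async iterators - a sketch of an async-listdir-style iterator (all names hypothetical; asyncio.sleep stands in for real async I/O, and asyncio.run is a later 3.7 convenience):

```python
import asyncio

class AsyncListDir:
    """Hand-written async iterator over a pretend directory listing;
    'entries' stands in for the result of a hypothetical async listdir."""

    def __init__(self, entries):
        self._it = iter(entries)

    def __aiter__(self):
        return self

    async def __anext__(self):
        await asyncio.sleep(0)            # stand-in for real async I/O
        try:
            return next(self._it)
        except StopIteration:
            raise StopAsyncIteration      # ends the 'async for' loop

async def walk():
    names = []
    async for name in AsyncListDir(['a.txt', 'b.txt']):
        names.append(name)
    return names

print(asyncio.run(walk()))   # ['a.txt', 'b.txt']
```

(In the spelling that eventually shipped, __aiter__ is a plain method returning the iterator directly; only __anext__ is a coroutine. Async generators, which remove the need for this boilerplate, did not arrive until PEP 525 in Python 3.6.)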

Inline comments below... On Mon, Apr 27, 2015 at 8:07 PM, Yury Selivanov <yselivanov.ml@gmail.com> wrote:
check for a generalized awaitable rather than specifically a coroutine.
these methods at all but given the implementation tactic for coroutines that may not be possible, so the nearest approximation is TypeError. (Also, NotImplementedError is typically to indicate that a subclass should implement it.)
Sounds like Stefan agrees. Are you aware of http://bugs.python.org/issue24018 (Generator ABC)?
I also hope that if someone has their own (renamed) copy of asyncio that works with 3.4, it will all still work with 3.5. Even if asyncio itself is provisional, none of the primitives (e.g. yield from) that it is built upon are provisional, so there should be no reason for it to break in 3.5.
I wonder if you could add some adaptation of the explanation I have posted (a few times now, I feel) for the reason why I prefer to suspend only at syntactically recognizable points (yield [from] in the past, await and async for/with in this PEP). Unless you already have it in the rationale (though it seems Mark didn't think it was enough :-).
Implementation changes don't need to go through the PEP process, unless they're really also interface changes.
Despite reading this I still get confused when reading the PEP (probably because asyncio uses "coroutine" in the latter sense). Maybe it would make sense to write "native coroutine" for the new concept, to distinguish the two concepts more clearly? (You could even change "awaitable" to "coroutine". Though I like "awaitable" too.)
``yield`` is disallowed syntactically outside functions (as are the syntactic constraints on ``await`` and ``async for|def``). Why isn't it placed in an 'else' clause on the inner try? (There may well be a reason but I can't figure out what it is, and PEP 343 doesn't seem to explain it.) Also, it's a shame we're perpetuating the sys.exc_info() triple in the API here, but I agree making __exit__ and __aexit__ different also isn't a great idea. :-( PS. With the new tighter syntax for ``await`` you don't need the ``exit_res`` variable any more.
(Also, the implementation of the latter is problematic -- check asyncio/locks.py and notice that __enter__ is empty...)
Have you considered making __aiter__ not an awaitable? It's not strictly necessary I think, one could do all the awaiting in __anext__. Though perhaps there are use cases that are more naturally expressed by awaiting in __aiter__? (Your examples all use ``async def __aiter__(self): return self`` suggesting this would be no great loss.)
it's about stopping an async iteration, but in my head I keep referring to it as AsyncStopIteration, probably because in other places we use async (or 'a') as a prefix.
How does ``yield from`` know that it is occurring in a generator-based coroutine?
3. ``yield from`` does not accept coroutine objects from plain Python generators (*not* generator-based coroutines.)
I am worried about this. PEP 380 gives clear semantics to "yield from <generator>" and I don't think you can revert that here. Or maybe I am misunderstanding what you meant here? (What exactly are "coroutine objects from plain Python generators"?)
Does send() make sense for a native coroutine? Check PEP 380. I think the only way to access the send() argument is by using ``yield`` but that's disallowed. Or is this about send() being passed to the ``yield`` that ultimately suspends the chain of coroutines? (You may just have to rewrite the section about that -- it seems a bit hidden now.)
(I'm not sure of the utility of this section.)
(This line seems redundant.)
def __a*__     | return awaitable                   | await
def __await__  | yield, yield from, return iterable | await
Is this still true with the proposed restrictions on what ``yield from`` accepts? (Hopefully I'm the one who is confused. :-)
-- --Guido van Rossum (python.org/~guido)

Hi Guido, Thank you for a very detailed review. Comments below: On 2015-04-28 5:49 PM, Guido van Rossum wrote:
My main question here is whether it's OK to reuse 'tp_reserved' (former tp_compare)? I had to remove this check: https://github.com/1st1/cpython/commit/4be6d0a77688b63b917ad88f09d446ac3b7e2... On the other hand, I think that it's a slightly better solution than adding a new slot.
Great! The current grammar requires parentheses for consequent await expressions: await (await coro()) I can change this (in theory), but I kind of like the parens in this case -- better readability. And it'll be a very rare case.
It's important to at least have 'iscoroutinefunction' -- to check that an object is a coroutine function. A typical use case would be a web framework that lets you bind coroutines to specific HTTP methods/paths:

    @http.get('/spam')
    async def handle_spam(request):
        ...

The 'http.get' decorator will need a way to raise an error if it's applied to a regular function (while the code is being imported, not at runtime). The idea here is to cover all kinds of Python objects in the inspect module; it's Python's reflection API. The other thing is that it's easy to implement this function for CPython: just check for the CO_COROUTINE flag. For other Python implementations it might be a different story. (More arguments for isawaitable() below)
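A sketch of the decorator check Yury describes (the 'http.get' framework names are hypothetical; inspect.iscoroutinefunction is the function-level check that shipped):

```python
import inspect

def get(path):
    """Hypothetical routing decorator: rejects plain functions at
    decoration (i.e. import) time, not at request time."""
    def deco(func):
        if not inspect.iscoroutinefunction(func):
            raise TypeError('%s must be a coroutine function' % func.__name__)
        func.route = ('GET', path)   # hypothetical: stash routing metadata
        return func
    return deco

@get('/spam')
async def handle_spam(request):
    ...

try:
    @get('/eggs')
    def handle_eggs(request):        # not async: rejected immediately
        ...
    rejected = False
except TypeError:
    rejected = True

print(handle_spam.route, rejected)   # ('GET', '/spam') True
```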
Agree.
I'll experiment with replacing (c) with a warning. We can disable __iter__ and __next__ for coroutines, but allow to use 'yield from' on them. Would it be a better approach?
(d) can also break something (hypothetically). I'm not sure why would someone use isgenerator() and isgeneratorfunction() on generator-based coroutines in code based on asyncio, but there is a chance that someone did (it should be trivial to fix the code). Same for iter() and next(). The chance is slim, but we may break some obscure code. Are you OK with this?
Yes, I saw the issue. I'll review it in more detail before thinking about Coroutine ABC for the next PEP update.
I agree. I'll try warnings for yield-fromming coroutines from regular generators (so that we can disable it in 3.7/3.6). *If that doesn't work*, I think we need a compromise (not ideal, but breaking things is worse): - yield from would always accept coroutine-objects - iter(), next(), tuple(), etc won't work on coroutine-objects - for..in won't work on coroutine-objects
I'll see what I can do.
"awaitable" is a more generic term... It can be a future, or it can be a coroutine. Mixing them in one may create more confusion. Also, "awaitable" is more of an interface, or a trait, which means that the object won't be rejected by the 'await' expression. I like your 'native coroutine' suggestion. I'll update the PEP.
OK
Good catch.
Yes, this can be simplified. It was indeed copied from PEP 343.
There is a section in Design Considerations about this. I should add a reference to it.
I'd be totally OK with that. Should I rename it?
I think it's a mistake that a lot of beginners may make at some point (and in this sense it's frequent). I really doubt that once you were hit by it more than two times you would make it again. This is a small wart, but we have to have a solution for it.
Will add a subsection specifically for them.
I think that isawaitable would be really useful. Especially, to check if an object implemented with the C API has a tp_await function. isawaitablefunction() looks a bit confusing to me:

    def foo():
        return fut

is awaitable, but there is no way to detect that.

    def foo(arg):
        if arg == 'spam':
            return fut

is awaitable sometimes.
I check that in 'ceval.c' in the implementation of YIELD_FROM opcode. If the current code object doesn't have a CO_COROUTINE flag and the opcode arg is a generator-object with CO_COROUTINE -- we raise an error.
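The flag Yury mentions is also visible from Python; in today's CPython, for instance, the inspect module re-exports the code-object flag constants (a sketch; 'native' and 'plain_gen' are hypothetical examples):

```python
import inspect

async def native():
    pass

def plain_gen():
    yield

# 'async def' functions carry CO_COROUTINE on their code object;
# plain generator functions do not.
is_coro = bool(native.__code__.co_flags & inspect.CO_COROUTINE)
is_gen_coro = bool(plain_gen.__code__.co_flags & inspect.CO_COROUTINE)
print(is_coro, is_gen_coro)   # True False
```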
    # *Not* decorated with @coroutine
    def some_algorithm_impl():
        yield 1
        yield from native_coroutine()  # <- this is a bug

"some_algorithm_impl" is a regular generator. By mistake someone could try to use "yield from" on a native coroutine (which is a bug 99.9% of the time). So we can rephrase it to: ``yield from`` does not accept *native coroutine objects* from regular Python generators. I also agree that raising an exception in this case in 3.5 might break too much existing code. I'll try warnings, and if that doesn't work we might want to just let this restriction slip.
Yes, 'send()' is needed to push values to the 'yield' statement somewhere (future) down the chain of coroutines (suspension point). This has to be articulated in a clear way, I'll think how to rewrite this section without replicating PEP 380 and python documentation on generators.
It's a little bit hard to understand that "awaitable" is a general term that includes native coroutine objects, so it's OK to write both:

    def __aenter__(self):
        return fut

    async def __aenter__(self):
        ...

We (Victor and I) decided that it might be useful to have an additional section that explains it.
True for the code that uses @coroutine decorators properly. I'll see what I can do with warnings, but I'll update the section anyways.
Are you OK with this thing?

On Tue, Apr 28, 2015 at 4:55 PM, Ethan Furman <ethan@stoneleaf.us> wrote:
You could at least provide an explanation about how the current proposal falls short. What code will break? There's a cost to __future__ imports too. The current proposal is a pretty clever hack -- and we've done similar hacks in the past (last I remember when "import ... as ..." was introduced but we didn't want to make 'as' a keyword right away). -- --Guido van Rossum (python.org/~guido)

Guido van Rossum wrote:
There's a benefit to having a __future__ import beyond avoiding hackery: by turning on the __future__ you can find out what will break when they become real keywords. But I suppose that could be achieved by having both the hack *and* the __future__ import available. -- Greg

Guido, I found a way to disable 'yield from', iter()/tuple() and 'for..in' on native coroutines with 100% backwards compatibility. The idea is to add one more code object flag: CO_NATIVE_COROUTINE, which will be applied, along with CO_COROUTINE, to all 'async def' functions. This way:

1. Old generator-based coroutines from asyncio are awaitable, because of the CO_COROUTINE flag (which the asyncio.coroutine decorator will set via 'types.coroutine').
2. New 'async def' functions are awaitable because of the CO_COROUTINE flag.
3. GenObject __iter__ and __next__ raise an error *only* if the object has the CO_NATIVE_COROUTINE flag. So iter(), next() and for..in aren't supported only for 'async def' functions (but will work fine on asyncio generator-based coroutines).
4. 'yield from' raises an error *only* if it yields a coroutine *with a CO_NATIVE_COROUTINE flag* from a regular generator.

Thanks, Yury On 2015-04-28 7:26 PM, Yury Selivanov wrote:
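The observable behaviour of points 3 and 1 can be sketched with what eventually shipped (note: the released implementation marks types.coroutine-wrapped generators with a CO_ITERABLE_COROUTINE flag rather than the CO_NATIVE_COROUTINE name proposed here, but the effect is the one described):

```python
import types

async def native():
    return 1

@types.coroutine
def generator_based():
    yield

# The native coroutine object refuses iteration...
c = native()
try:
    iter(c)
    native_iterable = True
except TypeError:
    native_iterable = False
c.close()   # avoid the "never awaited" warning

# ...while the decorated generator-based coroutine stays iterable
# (iter() on a generator returns the generator itself).
g = generator_based()
generator_iterable = iter(g) is g
g.close()

print(native_iterable, generator_iterable)   # False True
```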

Yury Selivanov wrote:
What about new 'async def' code called by existing code that expects to be able to use iter() or next() on the future objects it receives?
Won't that prevent some existing generator-based coroutines (ones not decorated with @coroutine) from calling ones implemented with 'async def'? -- Greg

On 2015-04-29 5:13 AM, Greg Ewing wrote:
It would. But that's not a backwards compatibility issue. Everything will work in 3.5 without a single line change. If you want to use new coroutines - use them, everything will work too. If, however, during the refactoring you've missed several generator-based coroutines *and* they are not decorated with @coroutine - then yes, you will get a runtime error. I see absolutely no problem with that. It's a small price to pay for a better design. Yury

Yury Selivanov wrote:
It seems to go against Guido's desire for the new way to be a 100% drop-in replacement for the old way. There are various ways that old code can end up calling new code -- subclassing, callbacks, etc. It also means that if person A writes a library in the new style, then person B can't make use of it without upgrading all of their code to the new style as well. The new style will thus be "infectious" in a sense. I suppose it's up to Guido to decide whether it's a good or bad infection. But the same kind of reasoning seemed to be at least partly behind the rejection of PEP 3152. -- Greg

Greg, On 2015-04-29 6:46 PM, Greg Ewing wrote:
It's a drop-in replacement ;) If you run your existing code - it will 100% work just fine. There is a probability that *when* you start applying the new syntax something could go wrong -- you're right here. I'm updating the PEP to explain this clearly, and let's see what Guido thinks about that. My opinion is that this is a solvable problem with clear guidelines on how to transition existing code to the new style. Thanks, Yury

Yury Selivanov wrote:
But isn't that too restrictive? Any function that returns an awaitable object would work in the above case.
What about when you change an existing non-suspendable function to make it suspendable, and have to deal with the ripple-on effects of that? Seems to me that affects everyone, not just beginners.
So what you really mean is "yield-from, when used inside a function that doesn't have @coroutine applied to it, will not accept a coroutine object", is that right? If so, I think this part needs re-wording, because it sounded like you meant something quite different. I'm not sure I like this -- it seems weird that applying a decorator to a function should affect the semantics of something *inside* the function -- especially a piece of built-in syntax such as 'yield from'. It's similar to the idea of replacing 'async def' with a decorator, which you say you're against. BTW, by "coroutine object", do you mean only objects returned by an async def function, or any object having an __await__ method? I think a lot of things would be clearer if we could replace the term "coroutine object" with "awaitable object" everywhere.
``yield from`` does not accept *native coroutine objects* from regular Python generators
It's the "from" there that's confusing -- it sounds like you're talking about where the argument to yield-from comes from, rather than where the yield-from expression resides. In other words, we thought you were proposing to disallow *this*:

    # *Not* decorated with @coroutine
    def some_algorithm_impl():
        yield 1
        yield from iterator_implemented_by_generator()

I hope we agree that this is a perfectly legitimate thing to do, and should remain so? -- Greg

Greg, On 2015-04-29 5:12 AM, Greg Ewing wrote:
It's just an example. All in all, I think that we should have full coverage of python objects in the inspect module. There are many possible use cases besides the one that I used -- runtime introspection, reflection, debugging etc, where you might need them.
I've been using coroutines on a daily basis for 6 or 7 years now, long before asyncio we had a coroutine-based framework at my firm (yield + trampoline). Neither I nor my colleagues had any problems with refactoring the code. I really try to speak from my experience when I say that it's not that big of a problem. Anyways, the PEP provides set_coroutine_wrapper which should solve the problem.
This is for the transition period. We don't want to break existing asyncio code. But we do want coroutines to be a separate concept from generators. It doesn't make any sense to iterate through coroutines or to yield-from them. We can deprecate @coroutine decorator in 3.6 or 3.7 and at some time remove it.
The PEP clearly separates awaitable from coroutine objects:

- a coroutine object is returned from a coroutine call;
- an awaitable is either a coroutine object or an object with __await__.

list(), tuple(), iter(), next(), for..in etc. won't work on objects with __await__ (unless they implement __iter__). The problem I was discussing is specifically about 'yield from' and coroutine objects.
Sure it's perfectly normal ;) I apologize for the poor wording. Yury

On 29/04/2015 9:49 a.m., Guido van Rossum wrote:
That seems unavoidable if the goal is for 'await' to only work on generators that are intended to implement coroutines, and not on generators that are intended to implement iterators. Because there's no way to tell them apart without marking them in some way. -- Greg

On 2015-04-28 11:59 PM, Greg wrote:
Not sure what you mean by "unavoidable". Before the last revision of the PEP it was perfectly fine to use generators in 'yield from' in generator-based coroutines:

    @asyncio.coroutine
    def foo():
        yield from gen()

and yet you couldn't do the same with 'await' (as it has a special opcode instead of GET_ITER that can validate what you're awaiting). With the new version of the PEP, 'yield from' in foo() would raise a TypeError. If we change it to a RuntimeWarning then we're safe in terms of backwards compatibility. I just want to see how exactly warnings will work (i.e. will they occur multiple times at the same 'yield from' expression, etc.) Yury

Yury Selivanov wrote:
Guido is worried about existing asyncio-based code that doesn't always decorate its generators with @coroutine. If I understand correctly, if you have

    @coroutine
    def coro1():
        yield from coro2()

    def coro2():
        yield from ...

then coro1() would no longer work. In other words, some currently legitimate asyncio-based code will break under PEP 492 even if it doesn't use any PEP 492 features. What you seem to be trying to do here is catch the mistake of using a non-coroutine iterator as if it were a coroutine. By "unavoidable" I mean I can't see a way to achieve that in all possible permutations without giving up some backward compatibility. -- Greg

Guido van Rossum wrote:
+1, that seems more consistent to me too.
I think that's a red herring in relation to the reason for StopAsyncIteration/AsyncStopIteration being needed. The real reason is that StopIteration is already being used to signal returning a value from an async function, so it can't also be used to signal the end of an async iteration.
I think we need some actual evidence before we can claim that one of these mistakes is more easily made than the other. A priori, I would tend to assume that failing to use 'await' when it's needed would be the more insidious one. If you mistakenly treat the return value of a function as a future when it isn't one, you will probably find out about it pretty quickly even under the old regime, since most functions don't return iterators. On the other hand, consider refactoring a function that was previously not a coroutine so that it now is. All existing calls to that function now need to be located and have either 'yield from' or 'await' put in front of them. There are three possibilities:

1. The return value is not used. The destruction-before-iterated-over heuristic will catch this (although since it happens in a destructor, you won't get an exception that propagates in the usual way).

2. Some operation is immediately performed on the return value. Most likely this will fail, so you will find out about the problem promptly and get a stack trace, although the error message will be somewhat tangentially related to the cause.

3. The return value is stored away for later use. Some time later, an operation on it will fail, but it will no longer be obvious where the mistake was made.

So it's all a bit of a mess, IMO. But maybe it's good enough. We need data. How often have people been bitten by this kind of problem, and how much trouble did it cause them?
That's made me think of something else. Suppose you want to suspend execution in an 'async def' function -- how do you do that if 'yield' is not allowed? You may need something like the suspend() primitive that I was thinking of adding to PEP 3152.
I don't see how this is different from an 'async def' function always returning an awaitable object, or a new awaitable object being created on each 'async def' function invocation. Sounds pretty much isomorphic to me. -- Greg

Greg, On 2015-04-29 5:12 AM, Greg Ewing wrote:
When we start thinking about generator-coroutines (the ones that combine 'await' and 'async yield'-something), we'll have to somehow multiplex them to the existing generator object (at least that's one way to do it). StopIteration is already extremely loaded with different special meanings. [..]
We do this in asyncio with Futures. We never combine 'yield' and 'yield from' in a @coroutine. We don't need 'suspend()'. If you need suspend()-like thing in your own framework, implement an object with an __await__ method and await on it.
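Yury's suggestion above -- "implement an object with an __await__ method and await on it" -- can be sketched in a few lines. The names ("Suspend", the toy driver) are illustrative only, not part of asyncio or the PEP:

```python
# A minimal sketch of a suspend()-style primitive built on __await__.
# The names here are made up; this is not part of asyncio.

class Suspend:
    """Awaitable that yields control to the scheduler exactly once."""
    def __await__(self):
        # __await__ must return an iterator; a generator qualifies.
        # Yielding here suspends the awaiting coroutine; the value the
        # driver sends back becomes the result of the await expression.
        return (yield)

async def coro():
    value = await Suspend()   # suspends here until the driver resumes us
    return value

# A toy driver standing in for an event loop:
c = coro()
c.send(None)                  # run to the first suspension point
try:
    c.send(42)                # resume with a result
except StopIteration as exc:
    print(exc.value)          # -> 42
```

This is the same mechanism asyncio's Future uses to make itself awaitable; a framework's own suspension primitive can be built the same way.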
Agree. I'll try to reword that section. Thanks, Yury

On Tue Apr 28 23:49:56 CEST 2015, Guido van Rossum quoted PEP 492:
So? PEP 492 never says what coroutines *are* in a way that explains why it matters that they are different from generators. Do you really mean "coroutines that can be suspended while they wait for something slow"? As best I can guess, the difference seems to be that a "normal" generator is using yield primarily to say: "I'm not done; I have more values when you want them", but an asynchronous (PEP492) coroutine is primarily saying: "This might take a while, go ahead and do something else meanwhile."
Does it really permit *making* them, or does it just signal that you will be waiting for them to finish processing anyhow, and it doesn't need to be a busy-wait? As nearly as I can tell, "async with" doesn't start processing the managed block until the "asynchronous" call finishes its work -- the only point of the async is to signal a scheduler that the task is blocked. Similarly, "async for" is still linearized, with each step waiting until the previous "asynchronous" step was not merely launched, but fully processed. If anything, it *prevents* within-task parallelism.
What justifies this limitation? Is there anything wrong awaiting something that eventually uses "return" instead of "yield", if the "this might take a while" signal is still true? Is the problem just that the current implementation might not take proper advantage of task-switching?
What would be wrong if a class just did __await__ = __anext__ ? If the problem is that the result of __await__ should be iterable, then why isn't __await__ = __aiter__ OK?
Does that mean "The ``await`` keyword has slightly higher precedence than ``yield``, so that fewer expressions require parentheses"?
Other than the arbitrary "keyword must be there" limitations imposed by this PEP, how is that different from: class AsyncContextManager: async def __aenter__(self): log('entering context') or even: class AsyncContextManager: def __aenter__(self): log('entering context') Will anything different happen when calling __aenter__ or log? Is it that log itself now has more freedom to let other tasks run in the middle?
Why? Does that just mean they won't take advantage of the freedom you offered them? Or are you concerned that they are more likely to cooperate badly with the scheduler in practice?
The same questions about why -- what is the harm?
Again, I don't see what this buys you except that a scheduler has been signaled that it is OK to pre-empt between rows. That is worth signaling, but I don't see why a regular iterator should be forbidden.
So the decision is made at compile-time, and can't be turned on later? Then what is wrong with just offering an alternative @coroutine that can be used to override the builtin? Or why not just rely on set_coroutine_wrapper entirely, and simply set it to None (so no wasted wrappings) by default? -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ

Hi Jim, On 2015-04-29 1:43 PM, Jim J. Jewett wrote:
Correct.
It does.
Right.
It enables cooperative parallelism.
We want to avoid people passing regular generators and random objects to 'await', because it is a bug.
If it's an 'async def' then sure, you can use it in await.
For coroutines in PEP 492: __await__ = __anext__ is the same as __call__ = __next__ __await__ = __aiter__ is the same as __call__ = __iter__
This is OK. The point is that you can use 'await log' in __aenter__. If you don't need awaits in __aenter__ you can use them in __aexit__. If you don't need them there too, then just define a regular context manager.
__aenter__ must return an awaitable.
Not sure I understand the question. It doesn't make any sense to use 'async with' outside of a coroutine. The interpreter won't know what to do with them: you need an event loop for that.
It's not about signaling. It's about allowing cooperative scheduling of long-running processes.
It is set to None by default. Will clarify that in the PEP. Thanks, Yury

On Wed Apr 29 20:06:23 CEST 2015,Yury Selivanov replied:
As best I can guess, the difference seems to be that a "normal" generator is using yield primarily to say:
"I'm not done; I have more values when you want them",
but an asynchronous (PEP492) coroutine is primarily saying:
"This might take a while, go ahead and do something else meanwhile."
Correct.
Then I strongly request a more specific name than coroutine. I would prefer something that refers to cooperative pre-emption, but I haven't thought of anything that is short without leading to other types of confusion. My least bad idea at the moment would be "self-suspending coroutine" to emphasize that suspending themselves is a crucial feature. Even "PEP492-coroutine" would be an improvement.
It does.
Bad phrasing on my part. Is there anything that prevents an asynchronous call (or waiting for one) without the "async with"? If so, I'm missing something important. Either way, I would prefer different wording in the PEP.
What justifies this limitation?
We want to avoid people passing regular generators and random objects to 'await', because it is a bug.
Why? Is it a bug just because you defined it that way? Is it a bug because the "await" makes timing claims that an object not making such a promise probably won't meet? (In other words, a marker interface.) Is it likely to be a symptom of something that wasn't converted correctly, *and* there are likely to be other bugs caused by that same lack of conversion?
For coroutines in PEP 492:
__await__ = __anext__ is the same as __call__ = __next__ __await__ = __aiter__ is the same as __call__ = __iter__
That tells me that it will be OK sometimes, but will usually be either a mistake or an API problem -- and it explains why. Please put those 3 lines in the PEP.
Is it an error to use "async with" on a regular context manager? If so, why? If it is just that doing so could be misleading, then what about "async with mgr1, mgr2, mgr3" -- is it enough that one of the three might suspend itself?
__aenter__ must return an awaitable
Why? Is there a fundamental reason, or it is just to avoid the hassle of figuring out whether or not the returned object is a future that might still need awaiting? Is there an assumption that the scheduler will let the thing-being awaited run immediately, but look for other tasks when it returns, and a further assumption that something which finishes the whole task would be too slow to run right away?
So does the PEP also provide some way of ensuring that there is an event loop? Does it assume that self-suspending coroutines will only ever be called by an already-running event loop compatible with asyncio.get_event_loop()? If so, please make these contextual assumptions explicit near the beginning of the PEP.
The same questions about why -- what is the harm?
I can imagine that as an implementation detail, the async for wouldn't be taken advantage of unless it was running under an event loop that knew to look for "async for" as suspension points. I'm not seeing what the actual harm is in either not happening to suspend (less efficient, but still correct), or in suspending between every step of a regular iterator (because, why not?)
(1) How does this differ from the existing asyncio.coroutine? (2) Why does it need to have an environment variable? (Sadly, the answer may be "backwards compatibility", if you're really just specifying the existing asyncio interface better.) (3) Why does it need [set]get_coroutine_wrapper, instead of just setting the asyncio.coroutines.coroutine attribute? (4) Why do the get/set need to be in sys? Is the intent to do anything more than preface execution with:

    import asyncio.coroutines
    asyncio.coroutines._DEBUG = True

-jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ

Jim, On 2015-04-30 2:41 PM, Jim J. Jewett wrote: [...]
Yes, you can't use 'yield from' in __exit__/__enter__ in current Python.
Same as 'yield from' is expecting an iterable, await is expecting an awaitable. That's the protocol. You can't pass random objects to 'with' statements, 'yield from', 'for..in', etc. If you write

    def gen():
        yield 1

    await gen()

then it's a bug.
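The protocol distinction Yury describes can be demonstrated directly (a minimal sketch using present-day names): 'await' rejects a plain generator with a TypeError, while a generator function marked with types.coroutine produces an awaitable.

```python
# Sketch: 'await' enforces the awaitable protocol. Names are illustrative.

import types

def plain_gen():
    yield 'request'

@types.coroutine
def marked_gen():
    result = yield 'request'   # what the scheduler sends back
    return result

async def awaits_plain():
    await plain_gen()          # raises TypeError: not an awaitable

async def awaits_marked():
    return await marked_gen()

bad = awaits_plain()
try:
    bad.send(None)
except TypeError as exc:
    print('rejected:', type(exc).__name__)   # -> rejected: TypeError

good = awaits_marked()
print(good.send(None))                       # -> request
```

The marked generator's yields pass through the awaiting coroutine transparently, which is exactly how asyncio's generator-based coroutines interoperated with 'await' under the PEP.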
There is a line like that: https://www.python.org/dev/peps/pep-0492/#await-expression Look for "Also, please note..." line.
'with' requires an object with __enter__ and __exit__; 'async with' requires an object with __aenter__ and __aexit__. You can have an object that implements both interfaces.
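An object implementing both interfaces might look like this (a hedged sketch; the class name is made up):

```python
# Sketch of one object usable with both 'with' and 'async with'.

import asyncio

class DualContext:
    # synchronous protocol
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        return False
    # asynchronous protocol: these are 'async def', so calling them
    # returns an awaitable, as 'async with' requires
    async def __aenter__(self):
        return self
    async def __aexit__(self, *exc):
        return False

def sync_use():
    with DualContext() as ctx:
        return ctx

async def async_use():
    async with DualContext() as ctx:
        return ctx

print(type(sync_use()).__name__)                 # -> DualContext
print(type(asyncio.run(async_use())).__name__)   # -> DualContext
```

(asyncio.run() postdates this thread; in 2015 one would have used loop.run_until_complete().)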
The fundamental reason why 'async with' is proposed is because you can't suspend execution in __enter__ and __exit__. If you need to suspend it there, use 'async with' and its __a*__ methods, but they have to return awaitable (see https://www.python.org/dev/peps/pep-0492/#new-syntax and look what 'async with' is semantically equivalent to)
You need some kind of loop, but it doesn't have to be the one from asyncio. There is at least one place in the PEP where it's mentioned that the PEP introduces a generic concept that can be used by asyncio *and* other frameworks.
The event loop doesn't need to know anything about 'async with' and 'async for'. For the loop it's always one thing -- something, somewhere, is awaiting some result.
That section describes some hassles we had in asyncio to enable better debugging. (3) because it allows enabling debug mode selectively, when we need it (4) because that's where functions like 'set_trace' live. set_coroutine_wrapper() also requires some modifications in the eval loop, so sys looks like the right place.
This won't work, unfortunately. You need to set the debug flag *before* you import asyncio package (otherwise we would have an unavoidable performance cost for debug features). If you enable it after you import asyncio, then asyncio itself won't be instrumented. Please see the implementation of asyncio.coroutine for details. set_coroutine_wrapper solves these problems. Yury

On Thu Apr 30 21:27:09 CEST 2015, Yury Selivanov replied: On 2015-04-30 2:41 PM, Jim J. Jewett wrote:
Bad phrasing on my part. Is there anything that prevents an asynchronous call (or waiting for one) without the "async with"?
If so, I'm missing something important. Either way, I would prefer different wording in the PEP.
Yes, you can't use 'yield from' in __exit__/__enter__ in current Python.
I tried it in 3.4, and it worked. I'm not sure it would ever be sensible, but it didn't raise any errors, and it did run. What do you mean by "can't use"?
That tells me that it will be OK sometimes, but will usually be either a mistake or an API problem -- and it explains why.
Please put those 3 lines in the PEP.
It was from reading the PEP that the question came up, and I just reread that section. Having those 3 explicit lines goes a long way towards explaining how an asyncio coroutine differs from a regular callable, in a way that the existing PEP doesn't, at least for me.
'with' requires an object with __enter__ and __exit__
'async with' requires an object with __aenter__ and __aexit__
You can have an object that implements both interfaces.
I'm still not seeing why with (let alone await with) can't just run whichever one it finds. "await with" won't actually let the BLOCK run until the future is resolved. So if a context manager only supplies __enter__ instead of __aenter__, then at most you've lost a chance to switch tasks while waiting -- and that is no worse than if the context manager just happened to be really slow.
For debugging this kind of mistakes there is a special debug mode in
Is the intent to do anything more than preface execution with:
    import asyncio.coroutines
    asyncio.coroutines._DEBUG = True
Why does asyncio itself have to be wrapped? Is that really something normal developers need to debug, or is it only for developing the stdlib itself? If it is only for developing the stdlib, then I would rather see workarounds like shoving _DEBUG into builtins when needed, as opposed to adding multiple attributes to sys. -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ

On 2015-05-01 5:37 PM, Jim J. Jewett wrote:
It probably executed without errors, but it didn't run the generators.

    class Foo:
        def __enter__(self):
            yield from asyncio.sleep(0)
            print('spam')

    with Foo():
        pass   # <- 'spam' won't ever be printed.
let's say you have a function:

    def foo():
        with Ctx():
            pass

if Ctx.__enter__ is a generator/coroutine, then foo becomes a generator/coroutine (otherwise how (and to what) would you yield from/await on __enter__?). And then suddenly calling 'foo' doesn't do anything (it will return you a generator/coroutine object). This isn't transparent or even remotely understandable.
Yes, normal developers need asyncio to be instrumented, otherwise you won't know what you did wrong when you called some asyncio code without 'await' for example. Yury

On Fri May 1 23:58:26 CEST 2015, Yury Selivanov wrote:
Yes, you can't use 'yield from' in __exit__/__enter__ in current Python.
What do you mean by "can't use"?
It probably executed without errors, but it didn't run the generators.
True. But it did return the one created by __enter__, so it could be bound to a variable and iterated within the block. There isn't an easy way to run the generator created by __exit__, and I'm not coming up with any obvious scenarios where it would be a sensible thing to do (other than using "with" on a context manager that *does* return a future instead of finishing). That said, I'm still not seeing why the distinction is so important that we have to enforce it at a language level, as opposed to letting the framework do its own enforcement. (And if the reason is performance, then make the checks something that can be turned off, or offer a fully instrumented loop as an alternative for debugging.)
If you enable it after you import asyncio, then asyncio itself won't be instrumented.
I'll trust you that it *does* work that way, but this sure sounds to me as though the framework isn't ready to be frozen with syntax, and maybe not even ready for non-provisional stdlib inclusion. I understand that the disconnected nature of asynchronous tasks makes them harder to debug. I heartily agree that the event loop should offer some sort of debug facility to track this. But the event loop is supposed to be pluggable. Saying that this requires not merely a replacement, or even a replacement before events are added, but a replacement made before python ever even loads the default version ... That seems to be much stronger than sys.settrace -- more like instrumenting the ceval loop itself. And that is something that ordinary developers shouldn't have to do. -jJ -- If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ

On Thu, Apr 30, 2015 at 11:41 AM, Jim J. Jewett <jimjjewett@gmail.com> wrote:
This seems so vague as to be useless to me. When using generators to implement iterators, "yield" very specifically means "here is the next value in the sequence I'm generating". (And to indicate there are no more values you have to use "return".)
Actually that's not even wrong. When using generators as coroutines, PEP 342 style, "yield" means "I am blocked waiting for a result that the I/O multiplexer is eventually going to produce". The argument to yield tells the multiplexer what the coroutine is waiting for, and it puts the generator stack frame on an appropriate queue. When the multiplexer has obtained the requested result it resumes the coroutine by using send() with that value, which resumes the coroutine/generator frame, making that value the return value from yield. Read Greg Ewing's tutorial for more color: http://www.cosc.canterbury.ac.nz/greg.ewing/python/yield-from/yield_from.htm... Then I strongly request a more specific name than coroutine.
No, this is the name we've been using since PEP 342 and it's still the same concept. -- --Guido van Rossum (python.org/~guido)

On Thu, 30 Apr 2015 12:32:02 -0700 Guido van Rossum <guido@python.org> wrote:
No, this is the name we've been using since PEP 342 and it's still the same concept.
The fact that all syntax uses the word "async" and not "coro" or "coroutine" hints that it should really *not* be called a coroutine (much less a "native coroutine", which is both silly and a lie). Why not "async function"? Regards Antoine.

It is spelled "Raymond Luxury-Yacht", but it's pronounced "Throatwobbler Mangrove". :-)

I am actually fine with calling a function defined with "async def ..." an async function, just as we call a function containing "yield" a generator function. However I prefer to still use "coroutine" to describe the concept implemented by async functions. *Some* generator functions also implement coroutines; however I would like to start a movement where eventually we'll always be using async functions when coroutines are called for, dedicating generators once again to their pre-PEP-342 role of a particularly efficient way to implement iterators.

Note that I'm glossing over the distinction between yield and yield-from here; both can be used to implement the coroutine pattern, but the latter has some advantages when the pattern is used to support an event loop: most importantly, when using yield-from-style coroutines, a coroutine can use return to pass a value directly to the stack frame that is waiting for its result. Prior to PEP 380 (yield from), the trampoline would have to be involved in this step, and there was no standard convention for how to communicate the final result to the trampoline; I've seen "returnValue(x)" (Twisted inlineCallbacks), "raise ReturnValue(x)" (Google App Engine NDB), "yield Return(x)" (Monocle) and I believe I've seen plain "yield x" too (the latter two being abominations in my mind, since it's unclear whether the generator is resumed after a value-returning yield).

While yield-from was an improvement over plain yield, await is an improvement over yield-from. As with most changes to Python (as well as natural evolution), an improvement often leads the way to another improvement -- one that wasn't obvious before. And that's fine. If I had lain awake worrying about the best way to spell async functions while designing asyncio, PEP 3156 probably still wouldn't have been finished today.
On Thu, Apr 30, 2015 at 12:40 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
-- --Guido van Rossum (python.org/~guido)

On 30 April 2015 at 20:32, Guido van Rossum <guido@python.org> wrote:
However, it is (as I noted in my other email) not very well documented. There isn't a glossary entry in the docs for "coroutine", and there's nothing pointing out that coroutines need (for anything other than toy cases) an event loop, trampoline, or IO multiplexer (call it what you want, although I prefer terms that don't make it sound like it's exclusively about IO). I'll raise an issue on the tracker for this, and I'll see if I can write up something. Once there's a non-expert's view in the docs, the experts can clarify the technicalities if I get them wrong :-) I propose a section under https://docs.python.org/3/reference/expressions.html#yield-expressions describing coroutines, and their usage. Paul

On Thu, Apr 30, 2015 at 3:32 PM, Guido van Rossum <guido@python.org> wrote: (me:)
but an asynchronous (PEP492) coroutine is primarily saying:
"This might take a while, go ahead and do something else meanwhile."
(Yuri:) Correct. (Guido:)> Actually that's not even wrong. When using generators as coroutines, PEP 342
style, "yield" means "I am blocked waiting for a result that the I/O multiplexer is eventually going to produce".
So does this mean that yield should NOT be used just to yield control if a task isn't blocked? (e.g., if its next step is likely to be long, or low priority.) Or even that it wouldn't be considered a co-routine in the python sense? If this is really just about avoiding busy-wait on network IO, then coroutine is way too broad a term, and I'm uncomfortable restricting a new keyword (async or await) to what is essentially a Domain Specific Language. -jJ

On Fri, May 1, 2015 at 11:26 AM, Jim J. Jewett <jimjjewett@gmail.com> wrote:
I'm not sure what you're talking about. Does "next step" refer to something in the current stack frame or something that you're calling? None of the current uses of "yield" (the keyword) in Python are good for lowering priority of something. It's not just the GIL, it's that coroutines (by whatever name) are still single-threaded. If you have something long-running CPU-intensive you should probably run it in a background thread (or process) e.g. using an executor.
The common use case is network I/O. But it's quite possible to integrate coroutines with a UI event loop. -- --Guido van Rossum (python.org/~guido)

On 05/01, Guido van Rossum wrote:
On Fri, May 1, 2015 at 11:26 AM, Jim J. Jewett <jimjjewett@gmail.com> wrote:
So when a generator is used as an iterator, yield and yield from are used to produce the actual working values... But when a generator is used as a coroutine, yield (and yield from?) are used to provide context about when they should be run again? -- ~Ethan~

On Fri, May 1, 2015 at 12:24 PM, Ethan Furman <ethan@stoneleaf.us> wrote:
The common thing is that the *argument* to yield provides info to whoever/whatever is on the other end, and the *return value* from yield [from] is whatever they returned in response.

When using yield to implement an iterator, there is no return value from yield -- the other end is the for-loop that calls __next__, and it just says "give me the next value", and the value passed to yield is that next value.

When using yield [from] to implement a coroutine the other end is probably a trampoline or scheduler or multiplexer. The argument to yield [from] tells the scheduler what you are waiting for. The scheduler resumes the coroutine when that value is available. At this point please go read Greg Ewing's tutorial. Seriously. http://www.cosc.canterbury.ac.nz/greg.ewing/python/yield-from/yield_from.htm...

Note that when using yield from, there is a third player: the coroutine that contains the "yield from". This is neither the scheduler nor the other thing; the communication between the scheduler and the other thing passes transparently *through* this coroutine. When the other thing has a value for this coroutine, it uses *return* to send it a value. The "other thing" here is a lower-level coroutine -- it could either itself also use yield-from and return, or it could be an "I/O primitive" that actually gives the scheduler a specific instruction (e.g. wait until this socket becomes readable). Please do read Greg's tutorial. -- --Guido van Rossum (python.org/~guido)
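The three-player arrangement Guido describes (scheduler, intermediate coroutine, lower-level "other thing") can be sketched as a toy trampoline. All names here are made up for illustration; this is not how asyncio is implemented:

```python
# Toy trampoline: the argument to 'yield' names what the coroutine
# waits for; the scheduler resumes it with send().

def fetch(key):
    # lower-level "coroutine": asks the scheduler for a value by key
    result = yield ('get', key)
    return result

def task():
    # the communication passes transparently *through* this frame
    value = yield from fetch('answer')
    return value * 2

def run(coro, store):
    # the "scheduler": services ('get', key) requests until completion
    request = coro.send(None)
    while True:
        op, key = request          # op is always 'get' in this toy
        try:
            request = coro.send(store[key])
        except StopIteration as exc:
            return exc.value       # the coroutine's 'return' value

print(run(task(), {'answer': 21}))   # -> 42
```

Note how fetch's 'return' delivers the result directly to the frame doing 'yield from', exactly the PEP 380 behavior Guido mentions.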

On Fri, May 1, 2015 at 2:59 PM, Guido van Rossum <guido@python.org> wrote:
I'm not sure what you're talking about. Does "next step" refer to something in the current stack frame or something that you're calling?
The next piece of your algorithm.
If there are more tasks than executors, yield is a way to release your current executor and go to the back of the line. I'm pretty sure I saw several examples of that style back when coroutines were first discussed. -jJ

On Fri, 1 May 2015 13:10:01 -0700 Guido van Rossum <guido@python.org> wrote:
I think Jim is saying that when you have a non-trivial task running in the event loop, you can "yield" from time to time to give other events (e.g. network events or timeouts) a chance to be processed in a timely manner. Of course, that assumes the event loop will somehow prioritize them over the just-yielded task. Regards Antoine.

On Fri, May 1, 2015 at 1:22 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
Yeah, but (unlike some frameworks) when using asyncio you can't just put a plain "yield" statement in your code. You'd have to do something like `yield from asyncio.sleep(0)`. -- --Guido van Rossum (python.org/~guido)
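The sleep(0) idiom Guido mentions still exists in modern asyncio, spelled with await; a small sketch (task names are made up) of how it lets tasks interleave:

```python
# Sketch: awaiting asyncio.sleep(0) suspends the current task once,
# letting other ready tasks run before it continues.

import asyncio

order = []

async def chatty(name, steps):
    for i in range(steps):
        order.append((name, i))
        await asyncio.sleep(0)   # cooperative yield point

async def main():
    await asyncio.gather(chatty('a', 2), chatty('b', 2))

asyncio.run(main())
print(order)   # the two tasks interleave step by step
```

Without the sleep(0), each task would run all its steps before the other got a turn.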

On Fri, May 1, 2015 at 4:10 PM, Guido van Rossum <guido@python.org> wrote:
Could you dig up the actual references? It seems rather odd to me to mix coroutines and threads this way.
I can try in a few days, but the primary case (and perhaps the only one with running code) was for n_executors=1. They assumed there would only be a single thread, or at least only one that was really important to the event loop -- the pattern was often described as an alternative to relying on threads. FWIW, Ron Adam's "yielding" in https://mail.python.org/pipermail/python-dev/2015-May/139762.html is in the same spirit. You replied it would be better if that were done by calling some method on the scheduling loop, but that isn't any more standard, and the yielding function is simple enough that it will be reinvented. -jJ

On 29 April 2015 at 18:43, Jim J. Jewett <jimjjewett@gmail.com> wrote:
I agree. While I don't use coroutines/asyncio, and I may never do so, I will say that I find Python's approach very difficult to understand. I'd hope that the point of PEP 492, by making await/async first class language constructs, would be to make async programming more accessible in Python. Whether that will actually be the case isn't particularly clear to me. And whether "async programming" and "coroutines" are the same thing, I'm even less sure of. I haven't really followed the discussions here, because they seem to be about details that are completely confusing to me. In principle, I support the PEP, on the basis that working towards better coroutine/async support in Python seems worthwhile to me. But until the whole area is made more accessible to the average programmer, I doubt any of this will be more than a niche area in Python. For example, the PEP says:

"""
New Coroutine Declaration Syntax

The following new syntax is used to declare a coroutine:

    async def read_data(db):
        pass
"""

Looking at the Wikipedia article on coroutines, I see an example of how a producer/consumer process might be written with coroutines:

    var q := new queue

    coroutine produce
        loop
            while q is not full
                create some new items
                add the items to q
            yield to consume

    coroutine consume
        loop
            while q is not empty
                remove some items from q
                use the items
            yield to produce

(To start everything off, you'd just run "produce"). I can't even see how to relate that to PEP 492 syntax. I'm not allowed to use "yield", so should I use "await consume" in produce (and vice versa)? I'd actually expect to just write 2 generators in Python, and use .send() somehow (it's clunky and I can never remember how to write the calls, but that's OK, it just means that coroutines don't have first-class syntax support in Python). This is totally unrelated to asyncio, which is the core use case for all of Python's async support. But it's what I think of when I see the word "coroutine" (and Wikipedia agrees).
Searching for "Async await" gets me to the Microsoft page "Asynchronous Programming with Async and Await" describing the C# keywords. That looks more like what PEP 492 is talking about, but it uses the name "async method". Maybe that's what the PEP should do, too, and leave the word "coroutine" for the yielding of control that I quoted from Wikipedia above. Confusedly, Paul

Hi Paul, On 2015-04-29 2:26 PM, Paul Moore wrote:
It will make it more accessible in Python. asyncio is getting a lot of traction, and with this PEP accepted I can see it only becoming easier to work with it (or any other async frameworks that start using the new syntax/protocols).
That Wikipedia page is very generic, and the pseudo-code that it uses does indeed look confusing. Here's how it might look (this is the same pseudo-code but tailored for PEP 492, not a real something):

    q = asyncio.Queue(maxsize=100)

    async def produce():
        # you might want to wrap it all in 'while True'
        while not q.full():
            item = create_item()
            await q.put(item)

    async def consume():
        while not q.empty():
            item = await q.get()
            process_item(item)

Thanks! Yury
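For completeness, one hedged way to actually drive such a produce/consume pair is under an event loop. This sketch inlines concrete stand-ins for Yury's create_item/process_item placeholders and uses asyncio.run(), which postdates this thread (in 2015 it would have been loop.run_until_complete()):

```python
# Sketch: running a bounded-queue producer/consumer pair to completion.

import asyncio

async def main():
    q = asyncio.Queue(maxsize=3)

    async def produce(n):
        for item in range(n):
            await q.put(item)        # suspends when the queue is full
        await q.put(None)            # sentinel: no more items

    async def consume():
        results = []
        while True:
            item = await q.get()     # suspends when the queue is empty
            if item is None:
                return results
            results.append(item)

    _, results = await asyncio.gather(produce(5), consume())
    return results

print(asyncio.run(main()))   # -> [0, 1, 2, 3, 4]
```

The queue's backpressure (maxsize=3) is what makes the two coroutines take turns, with no explicit "yield to" anywhere.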

On 29 April 2015 at 19:42, Yury Selivanov <yselivanov.ml@gmail.com> wrote:
I think the "loop" in the Wikipedia pseudocode was intended to be the "while True" here, not part of the "while" on the next line.
Thanks for that. That does look pretty OK. One question, though - it uses an asyncio Queue. The original code would work just as well with a list, or more accurately, something that wasn't designed for async use. So the translation isn't completely equivalent. Also, can I run the produce/consume just by calling produce()? My impression is that with asyncio I need an event loop - which "traditional" coroutines don't need. Nevertheless, the details aren't so important, it was only a toy example anyway. However, just to make my point precise, here's a more or less direct translation of the Wikipedia code into Python. It doesn't actually work, because getting the right combinations of yield and send stuff is confusing to me. Specifically, I suspect that "yield produce.send(None)" isn't the right way to translate "yield to produce". But it gives the idea.

    data = [1,2,3,4,5,6,7,8,9,10]
    q = []

    def produce():
        while True:
            while len(q) < 10:
                if not data:
                    return
                item = data.pop()
                print("In produce - got", item)
                q.append(item)
            yield consume.send(None)

    total = 0

    def consume():
        while True:
            while q:
                item = q.pop()
                print("In consume - handling", item)
                global total
                total += item
            yield produce.send(None)

    # Prime the coroutines
    produce = produce()
    consume = consume()
    next(produce)

    print(total)

The *only* bits of this that are related to coroutines are:

1. yield consume.send(None) (and the same for produce)
2. produce = produce() (and the same for consume) priming the coroutines
3. next(produce) to start the coroutines

I don't think this is at all related to PEP 492 (which is about async) but it's what is traditionally meant by coroutines. It would be nice to have a simpler syntax for these "traditional" coroutines, but it's a very niche requirement, and probably not worth it. But the use of "coroutine" in PEP 492 for the functions introduced by "async def" is confusing - at least to me - because I think of the above, and not of async.
Why not just call them "async functions" and leave the term coroutine for the above flow control construct, which is where it originated? But maybe that ship has long sailed - the term "coroutine" is pretty entrenched in the asyncio documentation. If so, then I guess we have to live with the consequences. Paul

Paul, On 2015-04-29 3:19 PM, Paul Moore wrote:
Well, yes. Coroutine is a generic term. And you can use PEP 492 coroutines without asyncio; in fact, that's how most tests for the reference implementation are written. Coroutine objects have .send(), .throw() and .close() methods (same as generator objects in Python). You can work with them without a loop, but loop implementations contain a lot of logic to implement the actual cooperative execution. You can use generators as coroutines, and nothing would prevent you from doing that after PEP 492; moreover, for some use-cases it might be quite a good decision. But a lot of code -- web frameworks, network applications, etc. -- will hugely benefit from the proposal's streamlined syntax and async for/with statements. [..]
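[To make the "no event loop required" point concrete, here is a minimal sketch of driving a PEP 492 coroutine by hand with .send(). The Suspend class is a hypothetical helper, not from the PEP or asyncio; it just implements the awaitable protocol directly via __await__, which is the approach Yury mentions elsewhere in the thread.]

```python
class Suspend:
    """A minimal awaitable: suspends once, resumes with whatever is sent."""
    def __init__(self, value):
        self.value = value

    def __await__(self):
        sent = yield self.value   # suspension point visible to the driver
        return sent

async def add(a, b):
    x = await Suspend(a)          # the driver decides what to send back
    return a + b + x

coro = add(1, 2)
first = coro.send(None)           # run to the first suspension point
assert first == 1                 # the value yielded by Suspend
try:
    coro.send(10)                 # resume; the coroutine runs to completion
except StopIteration as exc:
    result = exc.value            # the coroutine's return value
assert result == 13
```

An event loop does essentially this in a loop, interleaving many coroutines and deciding what to send into each one and when.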
Everybody is pulling me in a different direction :) Guido proposed to call them "native coroutines". Some people think that "async functions" is a better name. Greg loves his "cofunction" term. I'm flexible about how we name 'async def' functions. I like to call them "coroutines", because that's what they are, and that's how asyncio calls them. It's also convenient to use 'coroutine-object' to explain what the result of calling a coroutine is. Anyways, I'd be OK to start using a new term, if "coroutine" is confusing. Thanks, Yury

On Wed, Apr 29, 2015 at 2:42 PM, Yury Selivanov <yselivanov.ml@gmail.com> wrote:
Anyways, I'd be OK to start using a new term, if "coroutine" is confusing.
According to Wikipedia <http://en.wikipedia.org/wiki/Coroutine>, the term "coroutine" was first coined in 1958, so several generations of computer science graduates will be familiar with the textbook definition. If your use of "coroutine" matches the textbook definition of the term, I think you should continue to use it instead of inventing new names which will just confuse people new to Python. Skip

On Wed, Apr 29, 2015 at 1:14 PM, Skip Montanaro <skip.montanaro@gmail.com> wrote:
IIUC the problem is that Python has or will have a number of different things that count as coroutines by that classic CS definition, including generators, "async def" functions, and in general any object that implements the same set of methods as one or both of these objects, or possibly inherits from a certain abstract base class. It would be useful to have some terms to refer specifically to async def functions and the await protocol as opposed to generators and the iterator protocol, and "coroutine" does not make this distinction. -n -- Nathaniel J. Smith -- http://vorpus.org

Maybe it would help to refer to PEP 342, which first formally introduced the concept of coroutines (as a specific use case of generators) in Python. Personally I don't care too much which term the PEP uses, as long as it defines its terms. The motivation is already clear to me; it's the details that I care about before approving this PEP. On Wed, Apr 29, 2015 at 1:19 PM, Nathaniel Smith <njs@pobox.com> wrote:
-- --Guido van Rossum (python.org/~guido)

Skip Montanaro wrote:
I don't think anything in asyncio or PEP 492 fits that definition directly. Generators and async def functions seem to be what that page calls a "generator" or "semicoroutine": they differ in that coroutines can control where execution continues after they yield, while generators cannot, instead transferring control back to the generator's caller. -- Greg

Hello, On Thu, 30 Apr 2015 18:53:00 +1200 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
But of course it's only a Wikipedia page, which doesn't mean it has to provide a complete and well-defined picture, and the quality of some (important) Wikipedia pages is indeed pretty poor and doesn't improve. -- Best regards, Paul mailto:pmiscml@gmail.com

On 29 April 2015 at 20:42, Yury Selivanov <yselivanov.ml@gmail.com> wrote:
Everybody is pulling me in a different direction :)
Sorry :-)
If it helps, ignore my opinion - I'm not a heavy user of coroutines or asyncio, so my view shouldn't have too much weight. Thanks for your response - my question was a little off-topic, but your reply has made things clearer for me. Paul

On 29 April 2015 at 20:42, Yury Selivanov <yselivanov.ml@gmail.com> wrote:
I'd like the object created by an 'async def' statement to be called a 'coroutine function' and the result of calling it to be called a 'coroutine'. This is consistent with the usage of 'generator function' and 'generator', and has two advantages IMO:

- they both follow the pattern that an 'X function' is a function definition that, when called, returns an 'X';
- when the day comes to define generator coroutines, it will be clear what to call them: 'generator coroutine function' will be the function definition and 'generator coroutine' the object it creates.

Cheers, -- Arnaud

On 30 April 2015 at 09:50, Arnaud Delobelle <arnodel@gmail.com> wrote:
That would be an improvement over the confusing terminology in the PEP atm. The PEP proposes to name the inspect functions inspect.iscoroutine() and inspect.iscoroutinefunction(). According to the PEP iscoroutine() identifies "coroutine objects" and iscoroutinefunction() identifies "coroutine functions" -- a term which is not defined in the PEP but presumably means what the PEP calls a "coroutine" in the glossary. Calling the async def function an "async function" and the object it returns a "coroutine" makes for the clearest terminology IMO (provided the word coroutine is not also used for anything else). It would help to prevent both experienced and new users from confusing the two related but necessarily distinct concepts. Clearly distinct terminology makes it easier to explain/discuss something if nothing else because it saves repeating definitions all the time. -- Oscar

Hi Oscar, I've updated the PEP with some fixes of the terminology: https://hg.python.org/peps/rev/f156b272f860 I still think that 'coroutine functions' and 'coroutines' is a better pair than 'async functions' and 'coroutines'. First, it's similar to the existing terminology for generators. Second, it's less confusing. With PEP 492, at some point using generators to implement coroutines won't be a widespread practice, so 'async def' functions will be the only language construct that returns them. Yury On 2015-05-05 12:01 PM, Oscar Benjamin wrote:

On 5 May 2015 at 17:48, Yury Selivanov <yselivanov.ml@gmail.com> wrote:
I've updated the PEP with some fixes of the terminology: https://hg.python.org/peps/rev/f156b272f860
Yes that looks better.
I still think that 'coroutine functions' and 'coroutines' is a better pair than 'async functions' and 'coroutines'.
Fair enough. The terminology in the PEP seems consistent now which is more important than the exact terms used. -- Oscar

Hello, On Wed, 29 Apr 2015 20:19:40 +0100 Paul Moore <p.f.moore@gmail.com> wrote: []
All this confusion stems from the fact that the Wikipedia article fails to clearly provide classification dichotomies for coroutines. I suggest reading the Lua coroutine description as a much better attempt at classification: http://www.lua.org/pil/9.1.html . It is, for example, explicit in mentioning a common pitfall: "Some people call asymmetric coroutine semi-coroutines (because they are not symmetrical, they are not really co). However, other people use the same term semi-coroutine to denote a restricted implementation of coroutines". Comparing that to the Wikipedia article, you'll notice that it uses "semicoroutine" in just one of those senses, and, well, different people apply the "semi" part along different classification axes. So, trying to draw a table from Lua's text, there are the following 2 axes:

Axis 1: Symmetric vs Asymmetric. Asymmetric coroutines use 2 control flow constructs, akin to subroutine call and return. (Names vary; return is usually called yield.) Symmetric coroutines use only one. You can think of symmetric coroutines as only calling or only returning, though the less confusing term is "switch to".

Axis 2: "Lexical" vs "Dynamic". Naming is less standardized here. Lua calls its coroutines "true" coroutines, while others call theirs "generators". The real difference is intuitively akin to lexical vs dynamic scoping. "Lexical" coroutines require explicit marking of each (including recursive) call to a coroutine. "Dynamic" ones do not - you can call a normal-looking function, and it suddenly passes control to somewhere else (another coroutine), without you having a clue about it.

All *four* recombined types above are coroutines, albeit with slightly different properties. Symmetric dynamic coroutines are the most powerful type - as powerful as an abyss. They are what is usually used to frighten the innocent. Wikipedia shows you an example of them.
No sane real-world language uses symmetric coroutines - they're not useful without continuations, and sane real-world people don't want to manage continuations manually. Python, Lua, and C# use asymmetric coroutines. Python and C# use asymmetric "lexical" coroutines - the simplest, and thus safest, type, but one which has limitations wrt doing mind-boggling things. Lua has "dynamic" asymmetric coroutines - a more powerful, and thus more dangerous, type (you want to look with a jaundiced eye at that guy's framework based on "dynamic" coroutines - you'd better rewrite it from scratch before you trust it). -- Best regards, Paul mailto:pmiscml@gmail.com
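[The "lexical" restriction described above can be seen in miniature in plain Python generators: a helper function called from a generator cannot suspend that generator, because every suspension point - including delegation - must be lexically marked in the generator's own body. A sketch, purely illustrative:]

```python
def helper():
    # A plain function: nothing here can suspend gen() below.
    # In a "dynamic" coroutine system (e.g. Lua's), it could.
    return 42

def sub():
    yield "from sub"

def gen():
    yield helper()    # the suspension is marked *here*, not inside helper()
    yield from sub()  # even delegation needs an explicit lexical marker

g = gen()
assert next(g) == 42
assert next(g) == "from sub"
```

PEP 492's `await` keeps this property: a coroutine can only suspend at an expression explicitly marked with `await` (or `async for` / `async with`).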

Paul Moore wrote:
The Pythonic way to do things like that is to write the producer as a generator, and the consumer as a loop that iterates over it. Or the consumer as a generator, and the producer as a loop that send()s things into it. To do it symmetrically, you would need to write them both as generators (or async def functions or whatever) plus a mini event loop to tie the two together. -- Greg
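[The two asymmetric shapes described above can be sketched as follows - illustrative only, using a trivial summing consumer:]

```python
# Shape 1: producer as a generator, consumer as a plain loop over it.
def produce(data):
    for item in data:
        yield item

consumed = sum(produce([1, 2, 3, 4, 5]))
assert consumed == 15

# Shape 2: consumer as a generator, producer send()s items into it.
def consume():
    total = 0
    while True:
        item = yield
        if item is None:
            return total
        total += item

c = consume()
next(c)                  # prime: run to the first yield
for item in [1, 2, 3, 4, 5]:
    c.send(item)
try:
    c.send(None)         # sentinel: no more input
except StopIteration as exc:
    result = exc.value   # the consumer's return value
assert result == 15
```

In both shapes one side is an ordinary caller, which is exactly the asymmetry Greg describes; making both sides generators requires the mini event loop he mentions.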

On 29 April 2015 at 20:19, Paul Moore <p.f.moore@gmail.com> wrote:
Hmm, when I try to fix this "minor" (as I thought!) issue with my code, I discover it's more fundamental. The error I get is:

    Traceback (most recent call last):
      File ".\coro.py", line 28, in <module>
        next(produce)
      File ".\coro.py", line 13, in produce
        yield consume.send(None)
      File ".\coro.py", line 23, in consume
        yield produce.send(None)
    ValueError: generator already executing

What I now realise that means is that you cannot have producer send to consumer which then sends back to producer. That's what the "generator already executing" message means. This is fundamentally different from the "traditional" use of coroutines as described in the Wikipedia article, and as I thought was implemented in Python. The Wikipedia example allows two coroutines to freely yield between each other. Python, on the other hand, does not support this - it requires the mediation of some form of "trampoline" controller (or event loop, in asyncio terms) to redirect control. [1]

This limitation of Python's coroutines is not mentioned anywhere in PEP 342, and that's probably why I never really understood Python coroutines properly, as my mental model didn't match the implementation. Given that any non-trivial use of coroutines in Python requires an event loop / trampoline, I begin to understand the logic behind asyncio and this PEP a little better. I'm a long way behind in understanding the details, but at least I'm no longer completely baffled. Somewhere, there should be an explanation of the difference between Python's coroutines and Wikipedia's - I can't be the only person to be confused like this. But I don't think there are any docs covering "coroutines in Python" outside of PEP 342 - the docs just cover the components (the send and throw methods, the yield expression, etc). Maybe it could be covered in the send documentation (as that's what gives the "generator already executing" error). I'll try to work up a doc patch.
Actually, looking at the docs, I can't even *find* where the behaviour of the send method is defined - can someone point me in the right direction? Paul [1] It's sort of similar to how Python doesn't do tail call elimination. Symmetric yields rely on stack frames that are no longer needed being discarded if they are to avoid unlimited recursion, so to have symmetric yields, Python would need a form of tail call ("tail yield", I guess :-)) elimination.
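[A minimal illustration of the trampoline idea from the messages above: instead of the two generators resuming each other directly (which raises "generator already executing"), each yields back to a driving loop, which decides who runs next. A sketch, not asyncio:]

```python
data = list(range(1, 11))
q = []
total = 0

def produce():
    while data:
        while len(q) < 3 and data:
            q.append(data.pop())
        yield                     # suspend; let the trampoline run consume

def consume():
    global total
    while True:
        while q:
            total += q.pop()
        yield                     # suspend; let the trampoline run produce

p, c = produce(), consume()
while True:                       # the "trampoline": a dumb scheduler
    try:
        next(p)                   # run the producer until it yields
    except StopIteration:
        break                     # producer exhausted its data
    next(c)                       # run the consumer until it yields
next(c)                           # drain anything still queued
assert total == 55                # sum(1..10)
```

Each generator only ever yields to the loop that resumed it, so the "already executing" conflict never arises; an asyncio event loop is this pattern generalized to many tasks plus I/O readiness.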

Paul Moore wrote:
I agree. While I don't use coroutines/asyncio, and I may never do so, I will say that I find Python's approach very difficult to understand.
Well, I tried to offer something easier to understand. The idea behind PEP 3152 is that writing async code should be just like writing threaded code, except that the suspension points are explicit. But apparently that was too simple, or something.
Aaargh, this is what we get for overloading the word "coroutine". The Wikipedia article is talking about a technique where coroutines yield control to other explicitly identified coroutines. Coroutines in asyncio don't work that way; instead they just suspend themselves, and the event loop takes care of deciding which one to run next.
I can't even see how to relate that to PEP 492 syntax. I'm not allowed to use "yield",
You probably wouldn't need to explicitly yield, since you'd use an asyncio.Queue for passing data between the tasks, which takes care of suspending until data becomes available. You would only need to yield if you were implementing some new synchronisation primitive. Yury's answer to that appears to be that you don't do it with an async def function, you create an object that implements the awaitable-object protocol directly. -- Greg
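[For illustration, here is the producer/consumer written with an asyncio.Queue as Greg suggests. This is the modern spelling - asyncio.run arrived later, in Python 3.7 - but the shape is the point: neither task explicitly yields to the other; both just await the queue, and the event loop interleaves them:]

```python
import asyncio

async def produce(q):
    for item in range(1, 6):
        await q.put(item)      # suspends here if the queue is full
    await q.put(None)          # sentinel: no more items

async def consume(q):
    total = 0
    while True:
        item = await q.get()   # suspends here until an item is available
        if item is None:
            return total
        total += item

async def main():
    q = asyncio.Queue(maxsize=2)           # small buffer forces interleaving
    _, total = await asyncio.gather(produce(q), consume(q))
    return total

total = asyncio.run(main())
assert total == 15
```

The queue plays the role of the shared list in Paul's example, but its put/get coroutines are also the suspension points, so no hand-written trampoline logic is needed.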

On 30 April 2015 at 06:39, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Yep, I understand that. It's just that that's what I understand by coroutines.
Precisely. As I say, the terminology is probably not going to change now - no big deal in practice. Paul

On Wed, Apr 29, 2015 at 2:26 PM, Paul Moore <p.f.moore@gmail.com> wrote:
On 29 April 2015 at 18:43, Jim J. Jewett <jimjjewett@gmail.com> wrote:
So? PEP 492 never says what coroutines *are* in a way that explains why it matters that they are different from generators.
...
I think so ... but the fact that nothing is actually coming via the await channel makes it awkward. I also worry that it would end up with an infinite stack depth, unless the await were actually replaced with some sort of framework-specific scheduling primitive, or one of them were rewritten differently to ensure it returned to the other instead of calling it anew. I suspect the real problem is that the PEP is really only concerned with a very specific subtype of coroutine, and these don't quite fit. (Though it could be done by somehow making them both await on the queue status, instead of on each other.) -jJ

On Thu, Apr 30, 2015 at 10:24 AM, Jim J. Jewett <jimjjewett@gmail.com> wrote:
I suspect the real problem is that the PEP is really only concerned with a very specific subtype of coroutine, and these don't quite fit.
That's correct. The PEP is concerned with the existing notion of coroutines in Python, which was first introduced by PEP 342: Coroutines via Enhanced Generators. The Wikipedia definition of coroutine (which IIRC is due to Knuth) is quite different, and nobody who actually uses the coding style introduced by PEP 342 should mistake one for the other.

This same notion of "Pythonic" (so to speak) coroutines was refined by PEP 380, which introduced yield from. It was then *used* in PEP 3156 (the asyncio package) for the specific purpose of standardizing a way to do I/O multiplexing using an event loop. The basic premise of using coroutines with the asyncio package is that most of the time you can write *almost* sequential code as long as you insert "yield from" in front of all blocking operations (and as long as you use blocking operations that are implemented by or on top of the asyncio package). This makes the code easier to follow than code written with "traditional" event-loop-based I/O multiplexing (which is heavy on callbacks, or callback-like abstractions like Twisted's Deferred).

However, heavy users of the asyncio package (like Yury) discovered some common patterns when using coroutines that were awkward. In particular, "yield from" is quite a mouthful, the coroutine version of a for-loop is awkward, and a with-statement can't have a blocking operation in __exit__ (because there's no explicit yield opcode). PEP 492 proposes a quite simple and elegant solution for these issues. Most of the technical discussion about the PEP is on getting the details right so that users won't have to worry about them, and can instead just continue to write *almost* sequential code when using the asyncio package (or some other framework that offers an event loop integrated with coroutines). -- --Guido van Rossum (python.org/~guido)
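[To illustrate the with-statement point above: __exit__ cannot contain a suspension point, but PEP 492's async with calls __aenter__/__aexit__, which are coroutines and may await. A hedged sketch using modern asyncio (asyncio.run is from 3.7; the sleep(0) calls stand in for real async acquire/release work):]

```python
import asyncio

class Resource:
    # An async context manager: unlike __exit__, __aexit__ is a
    # coroutine, so cleanup may itself await (e.g. flush a socket).
    async def __aenter__(self):
        await asyncio.sleep(0)   # stand-in for an async acquire
        return "acquired"

    async def __aexit__(self, exc_type, exc, tb):
        await asyncio.sleep(0)   # stand-in for an async release
        return False             # don't swallow exceptions

async def main():
    async with Resource() as state:
        return state

state = asyncio.run(main())
assert state == "acquired"
```

With generator-based coroutines there was no way to express the release step as a blocking operation, because __exit__ is an ordinary method with no place to put a yield.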

Literary critic here. In section "Specification"
The usual phrasing of "strongly suggested" in specifications is "presumes knowledge". Some people think "strongly suggest <do>ing" is presumptuous and condescending, YMMV. Also, the relationship to PEP 3152 should be mentioned IMO. I propose: This specification presumes knowledge of the implementation of coroutines in Python (PEP 342 and PEP 380). Motivation for the syntax changes proposed here comes from the asyncio framework (PEP 3156) and the "Cofunctions" proposal (PEP 3152, now rejected in favor of this specification). I'm not entirely happy with my phrasing, because there are at least four more or less different concepts that might claim the bare word "coroutine": - this specification - the implementation of this specification - the syntax used to define coroutines via PEPs 342 and 380 - the semantics of PEP 342/380 coroutines In both your original and my rephrasing, the use of "coroutine" violates your convention that it refers to the PEP's proposed syntax for coroutines. Instead it refers to the semantics of coroutines implemented via PEP 342/380. This is probably the same concern that motivated Guido's suggestion to use "native coroutines" for the PEP 492 syntax (but I'm not Dutch, so maybe they're not the same :-). I feel this is a real hindrance to understanding for someone coming to the PEP for the first time. You know which meaning of coroutine you mean, but the new reader needs to think hard enough to disambiguate every time the word occurs. If people agree with me, I could go through the PEP and revise mentions of "coroutine" in "disambiguated" style. In section "Comprehensions":
Don't invite trouble.<wink /> How about: Syntax for asynchronous comprehensions could be provided, but this construct is outside of the scope of this PEP. In section "Async lambdas"
Same recommendation as for "Comprehensions". I wouldn't mention the tentative syntax, it is both obvious and inviting to trouble.
A partial list of commentators I've found to be notable, YMMV: Greg Ewing for PEP 3152 and his Loyal Opposition to this PEP. Mark Shannon's comments have led to substantial clarifications of motivation for syntax, at least in my mind. Paul Sokolovsky for information about the MicroPython implementation.

Hi Stephen, Thanks a lot for the feedback and suggestions. I'll apply them to the PEP. On 2015-04-28 11:03 PM, Stephen J. Turnbull wrote:
Your wording is 100% better and it's time to mention PEP 3152 too.
I also like Guido's suggestion to use "native coroutine" term. I'll update the PEP (I have several branches of it in the repo that I need to merge before the rename).
Agree. Do you think it'd be better to combine comprehensions and async lambdas in one section?
Sure! I was going to add everybody after the PEP is accepted/rejected/postponed.
Thanks! Yury

Yury Selivanov wrote:
I'd still prefer to avoid use of the word "coroutine" altogether as being far too overloaded. I think even the term "native coroutine" leaves room for ambiguity. It's not clear to me whether you intend it to refer only to functions declared with 'async def', or to any function that returns an awaitable object. The term "async function" seems like a clear and unambiguous way to refer to the former. I'm not sure what to call the latter. -- Greg
participants (16)
- Antoine Pitrou
- Arnaud Delobelle
- Ethan Furman
- Greg
- Greg Ewing
- Guido van Rossum
- Jim J. Jewett
- Nathaniel Smith
- Oscar Benjamin
- Paul Moore
- Paul Sokolovsky
- Skip Montanaro
- Stefan Behnel
- Stephen J. Turnbull
- Walter Dörwald
- Yury Selivanov