Are there asynchronous generators?

Hello,

I had a generator producing pairs of values and wanted to feed all the first members of the pairs to one consumer and all the second members to another consumer. For example:

    def pairs():
        for i in range(4):
            yield (i, i ** 2)

    biconsumer(sum, list)(pairs()) -> (6, [0, 1, 4, 9])

The point is I wanted the consumers to be suspended and resumed in a coordinated manner: the first consumer is invoked and wants the first element. The coordinator implemented by the biconsumer function invokes pairs(), gets the first pair and yields its first member to the first consumer. Then the first consumer wants the next element, but now it's the second consumer's turn, so the first consumer is suspended and the second consumer is invoked and fed the second member of the first pair. Then the second consumer wants the next element, but it's the first consumer's turn... and so on. In the end, when the stream of pairs is exhausted, StopIteration is thrown to both consumers and their results are combined.

The cooperative asynchronous nature of the execution reminded me of asyncio and coroutines, so I thought that biconsumer might be implemented using them. However, it seems that it is impossible to write an "asynchronous generator", since the "yielding pipe" is already used for the communication with the scheduler. And even if it were possible to make an asynchronous generator, it is not clear how to feed it to a synchronous consumer like the sum() or list() function.

With PEP 492 the concepts of generators and coroutines were separated, so asynchronous generators may be possible in theory. An ordinary function has just the returning pipe – for returning the result to the caller. A generator also has a yielding pipe – used for yielding values during iteration – while its return pipe is used to finish the iteration. A native coroutine has a returning pipe – to return the result to a caller, just like an ordinary function – and also an async pipe – used for communication with a scheduler and for suspending execution. An asynchronous generator would simply have both a yielding pipe and an async pipe.

So my question is: was code like the following considered? Does it make sense? Or are there not enough use cases for such code? I found only a short mention in https://www.python.org/dev/peps/pep-0492/#coroutine-generators, so possibly these coroutine-generators are the same idea.

    async def f():
        number_string = await fetch_data()
        for n in number_string.split():
            yield int(n)

    async def g():
        result = async/await? sum(f())
        return result

    async def h():
        the_sum = await g()

As for an explanation of the execution of h() by an event loop: h is a native coroutine called by the event loop, having both a returning pipe and an async pipe. The returning pipe leads to the end of the task; the async pipe is used for communication with the scheduler. Then g() is called asynchronously – using the await keyword means that access to the async pipe is given to the callee. Then g() invokes the asynchronous generator f() and gives it access to its async pipe, so when f() is yielding values to sum, it can also yield a future to the scheduler via the async pipe and suspend the whole task.

Regards,
Adam Bartoš
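For contrast, the same result is already reachable synchronously with itertools.tee – but only by buffering, which is exactly what the coordinated suspend/resume described above is meant to avoid. A minimal sketch, not from the original post:

    import itertools

    def pairs():
        for i in range(4):
            yield (i, i ** 2)

    # sum() drains its copy first, so tee() must buffer every pair for
    # list(); the two consumers are never interleaved.
    first, second = itertools.tee(pairs())
    print(sum(p[0] for p in first), list(p[1] for p in second))
    # -> 6 [0, 1, 4, 9]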

In my experience, it's much easier to use asyncio Queues for this. Instead of yielding, push to a queue. The consumer can then use "await queue.get()". I think the semantics of the generator become too complicated otherwise, or maybe impossible. Maybe have a look at this article: http://www.interact-sw.co.uk/iangblog/2013/11/29/async-yield-return Jonathan 2015-06-24 12:13 GMT+02:00 Andrew Svetlov <andrew.svetlov@gmail.com>:
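A minimal sketch of that queue-based approach (the produce/consume split and the None sentinel are illustrative choices, not from the original post):

    import asyncio

    async def produce(queue):
        # Push values into the queue instead of yielding them.
        for i in range(4):
            await queue.put((i, i ** 2))
        await queue.put(None)  # sentinel: no more items

    async def consume(queue):
        total = 0
        while True:
            item = await queue.get()
            if item is None:   # sentinel seen: producer is done
                break
            total += item[0]
        return total

    loop = asyncio.get_event_loop()
    queue = asyncio.Queue()
    _, total = loop.run_until_complete(
        asyncio.gather(produce(queue), consume(queue)))
    print(total)  # -> 6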

Is there a way for a producer to say that there will be no more items put, so consumers get something like StopIteration when there are no more items left afterwards? There is also the problem that one cannot easily feed a queue, asynchronous generator, or any asynchronous iterator to a simple synchronous consumer like sum() or list() or "".join(). It would be nice if there were a way to wrap them into asynchronous ones when needed – something like (async sum)(asynchronously_produced_numbers()). On Wed, Jun 24, 2015 at 1:54 PM, Jonathan Slenders <jonathan@slenders.be> wrote:

I'm afraid the last will never be possible -- you cannot push async coroutines into a synchronous convention call. Your example should be converted into `await async_sum(asynchronously_produced_numbers())`, which is possible right now. (asynchronously_produced_numbers should be an *iterator* with __aiter__/__anext__ methods, not a generator with yield expressions inside.) On Sun, Jun 28, 2015 at 1:02 PM, Adam Bartoš <drekin@gmail.com> wrote:
-- Thanks, Andrew Svetlov
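A minimal sketch of what that looks like (async_sum and AsyncNumbers are illustrative names; the __aiter__/__anext__ protocol itself is from PEP 492):

    import asyncio

    class AsyncNumbers:
        """Asynchronous iterator implemented with __aiter__/__anext__."""
        def __init__(self, n):
            self._it = iter(range(n))
        def __aiter__(self):
            return self
        async def __anext__(self):
            try:
                return next(self._it)
            except StopIteration:
                raise StopAsyncIteration

    async def async_sum(aiterable, start=0):
        total = start
        async for x in aiterable:
            total += x
        return total

    loop = asyncio.get_event_loop()
    print(loop.run_until_complete(async_sum(AsyncNumbers(4))))  # -> 6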

I understand that it's impossible today, but I thought that if asynchronous generators were going to be added, some kind of generalized generator mechanism allowing yielding to multiple different places would be needed anyway. So in theory no special change to synchronous consumers would be needed – when the asynchronous generator object is created, it gets a link to the scheduler from the caller, then it's given as an argument to sum(); when sum wants the next item it calls next(), and the asynchronous generator can either yield the next value to sum or it can yield a future to the scheduler and suspend execution of the whole task. But since it's a good idea to be explicit and mark each asynchronous call, some wrapper like (async sum) would be used. On Sun, Jun 28, 2015 at 12:07 PM, Andrew Svetlov <andrew.svetlov@gmail.com> wrote:

[Fixing the messed-up reply quoting order] Adam Bartoš wrote on 28.06.2015 at 12:30:
Stackless might eventually support something like that. That being said, note that by design, the scheduler (or I/O loop, if that's what you're using) always lives *outside* of the whole asynchronous call chain, at its very end, but can otherwise be controlled by arbitrary code itself, and that is usually synchronous code. In your example, it could simply be moved between the first async function and its synchronous consumer ("sum" in your example). Doing that is entirely possible. What is not possible (unless you're using a design like Stackless) is that this scheduler controls its own controller, e.g. that it starts interrupting the execution of the synchronous code that called it. Stefan

Hello, On Sun, 28 Jun 2015 12:02:01 +0200 Adam Bartoš <drekin@gmail.com> wrote:
Sure, just designate a sentinel value of your liking (the StopIteration class value seems an obvious choice) and use it for that purpose.
All that is easily achievable with classical Python coroutines, not with the asyncio garden variety of coroutines, which lately were cast into the language level with async/await disablers:

    def coro1():
        yield 1
        yield 2
        yield 3

    def coro2():
        yield from coro1()
        yield 4
        yield 5

    print(sum(coro2()))

And back to your starter question, it's also possible - and also only with classical Python coroutines. I mentioned not just the possibility, but the necessity of that in my independent "reverse engineering" of how yield from works, https://dl.dropboxusercontent.com/u/44884329/yield-from.pdf (point 9 there). That's a simplistic presentation, and in the presence of a "syscall main loop", the example there would need to be:

    class MyValueWrapper:
        def __init__(self, v):
            self.v = v

    def pump(ins, outs):
        for chunk in gen(ins):
            if isinstance(chunk, MyValueWrapper):
                # if the value we got from a coro is of
                # the type we expect, process it
                yield from outs.write(chunk.v)
            else:
                # anything else is simply not for us,
                # re-yield it to higher levels (ultimately, the mainloop)
                yield chunk

    def gen(ins):
        yield MyValueWrapper("<b>")
        # Assume read_in_chunks() already yields MyValueWrapper objects
        yield from ins.read_in_chunks(1000*1000*1000)
        yield MyValueWrapper("</b>")
-- Best regards, Paul mailto:pmiscml@gmail.com

On 2015-06-28 11:52 AM, Paul Sokolovsky wrote:
You have easily achieved combining two generators with 'yield from' and feeding that to the 'sum' builtin. But there is no way to combine synchronous loops with asynchronous coroutines; by definition, the entire process will block while you are iterating through them. Yury

Hello, On Sun, 28 Jun 2015 18:14:20 -0400 Yury Selivanov <yselivanov.ml@gmail.com> wrote:
Right, the point here was that PEP 492, by banning usage of "yield" in coroutines, doesn't help with such simple and basic usage of them. And then I can again say what I said during the initial discussion of PEP 492: I have mixed feelings about it: the promise of making coroutines easier and more user friendly is worth all support, but the step of limiting basic language usage in them doesn't seem good. What I and other people can do then is just trust that you guys know what you're doing and that PEP 492 is just a first step. But the bottom line is that I personally don't find async/await worth using for now - it's better to stick to good old yield from, until the promise of truly better coroutines is delivered.
Indeed, solving this issue requires the "inversion of inversion of control" pattern. A typical real-world example is that someone has got their (unwise) main loop and wants us to do callback-mess programming with it, but we don't want them to call us; we want to call them, at controlled intervals, to do a controlled amount of work. The solution would be to pass a callback which looks like a normal function, but which is actually a coroutine. The foreign main loop, calling it, would be suspended and control passed to "us", and we can let another iteration of the foreign main loop run by resuming that coroutine. The essence of this approach lies in having a coroutine "look like" a usual function, or more exactly, in being able to resume a coroutine from the context of a normal function. And that's explicitly not what Python coroutines are - they require lexical marking of each site where coroutine suspension may happen (for good reasons which were described here on the list many times). During the previous phase of discussion, I gave a classification of different types of coroutines to grasp/structure all this stuff better: http://code.activestate.com/lists/python-dev/136046/ -- Best regards, Paul mailto:pmiscml@gmail.com
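A sketch of that pattern using the third-party greenlet library (not mentioned in the original post), which provides exactly this - resuming a coroutine from the context of a normal function call:

    from greenlet import greenlet

    def foreign_main_loop(callback):
        # someone else's main loop, which only knows how to call a plain function
        for i in range(3):
            callback(i)

    def our_code():
        while True:
            event = main.switch()           # suspend; wait until we are "called"
            print("handling event", event)  # do a controlled amount of work

    worker = greenlet(our_code)
    main = greenlet.getcurrent()
    worker.switch()                         # run our_code up to its first switch()
    foreign_main_loop(lambda ev: worker.switch(ev))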

On 29 June 2015 at 16:44, Paul Sokolovsky <pmiscml@gmail.com> wrote:
The purpose of PEP 492 is to fundamentally split the asynchronous IO use case away from traditional generators. If you're using native coroutines, you MUST have an event loop, or at least be using something like asyncio.run_until_complete() (which spins up a scheduler for the duration). If you're using generators without @types.coroutine or @asyncio.coroutine (or the equivalent for tulip, Twisted, etc), then you're expecting a synchronous driver rather than an asynchronous one.

This isn't an accident, or something that will change at some point in the future; it's the entire point of the exercise: having it be obvious both how you're meant to interact with something based on the way it's defined, and how you factor out subcomponents of the algorithm. Asynchronous driver? Use a coroutine. Synchronous driver? Use a generator.

What we *don't* have are consumption functions that have an implied "async for" inside them - functions like sum(), any(), all(), etc are all synchronous drivers. The other key thing we don't have yet? Asynchronous comprehensions.

A peek at the various options for parallel execution described in the https://docs.python.org/3/library/concurrent.futures.html documentation helps illustrate why: once we're talking about applying reduction functions to asynchronous iterables we're getting into full-blown language-level-support-for-MapReduce territory. Do the substeps still need to be executed in series? Or can the substeps be executed in parallel, and either accumulated in iteration order or as they become available? Does it perhaps make sense to *require* that the steps be executable in parallel, such that we could write the following:

    result = sum(x*x for async x in coro)

Where the reduction step remains synchronous, but we can mark the comprehension/map step as asynchronous, and have that change the generated code to create an implied lambda for the "lambda x: x*x" calculation, dispatch all of those to the scheduler at once, and then produce the results one at a time?

The answer to that is "quite possibly, but we don't really know yet". PEP 492 is enough to address some major comprehensibility challenges that exist around generators-as-coroutines. It *doesn't* bring language level support for parallel MapReduce to Python, but it *does* bring some interesting new building blocks for folks to play around with in that regard (in particular, figuring out what we want the comprehension level semantics of "async for" to be). Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

Not following this in detail, but want to note that async isn't a good model for parallelization (except I/O) because the expectation of coroutines is single threading. The event loop serializes callbacks. Changing this would break expectations and code. On Jun 29, 2015 10:33 AM, "Nick Coghlan" <ncoghlan@gmail.com> wrote:

On 29 Jun 2015 7:33 pm, "Guido van Rossum" <guido@python.org> wrote:
> Not following this in detail, but want to note that async isn't a good model for parallelization (except I/O) because the expectation of coroutines is single threading. The event loop serializes callbacks. Changing this would break expectations and code.

Yeah, it's a bad idea - I realised after reading your post that, because submission for scheduling and waiting for a result can already be separated, it should be possible in Py 3.5 to write a "parallel" asynchronous iterator that eagerly consumes the awaitables produced by another asynchronous iterator, schedules them all, then produces the awaitables in order. (That idea is probably as clear as mud without code to show what I mean...) Regards, Nick.

On 06/29/2015 07:23 AM, Nick Coghlan wrote:
Only the parts concerning "schedules them all", and "produces awaitables in order". ;-) Async IO is mainly about recapturing idle cpu time while waiting for relatively slow io. But it could also be a way to organise asynchronous code. In the earlier example with circles, with each object having its own thread... and that running into the thousands, it can be rearranged a bit if each scheduler has its own thread. Then objects can be assigned to schedulers instead of threads. (Or something like that.) Of course that's still clear as mud at this point, but maybe a different colour of mud. ;-) Cheers, Ron

On 30 June 2015 at 07:51, Ron Adam <ron3200@gmail.com> wrote:
Some completely untested conceptual code that may not even compile, let alone run, but hopefully conveys what I mean better than English does:

    import asyncio

    def get_awaitables(async_iterable):
        """Gets a list of awaitables from an asynchronous iterator"""
        asynciter = async_iterable.__aiter__()
        awaitables = []
        # Note: conceptual only - StopAsyncIteration is actually raised when
        # an __anext__() result is awaited, not by the __anext__() call itself
        while True:
            try:
                awaitables.append(asynciter.__anext__())
            except StopAsyncIteration:
                break
        return awaitables

    async def wait_for_result(awaitable):
        """Simple coroutine to wait for a single result"""
        return await awaitable

    def iter_coroutines(async_iterable):
        """Produces coroutines to wait for each result from an
        asynchronous iterator"""
        for awaitable in get_awaitables(async_iterable):
            yield wait_for_result(awaitable)

    def iter_tasks(async_iterable, eventloop=None):
        """Schedules event loop tasks to wait for each result from an
        asynchronous iterator"""
        if eventloop is None:
            eventloop = asyncio.get_event_loop()
        for coroutine in iter_coroutines(async_iterable):
            yield eventloop.create_task(coroutine)

    class aiter_parallel:
        """Asynchronous iterator to wait for several asynchronous
        operations in parallel"""
        def __init__(self, async_iterable):
            # Concurrent evaluation of future results is launched immediately
            self._tasks = tasks = list(iter_tasks(async_iterable))
            self._taskiter = iter(tasks)
        def __aiter__(self):
            return self
        def __anext__(self):
            try:
                return next(self._taskiter)
            except StopIteration:
                raise StopAsyncIteration

    # Example reduction function
    async def sum_async(async_iterable, start=0):
        tally = start
        async for x in aiter_parallel(async_iterable):
            tally += x
        return tally

    # Parallel sum from synchronous code:
    result = asyncio.get_event_loop().run_until_complete(
        sum_async(async_iterable))

    # Parallel sum from asynchronous code:
    result = await sum_async(async_iterable)

As the definition of "aiter_parallel" shows, we don't offer any nice syntactic sugar for defining asynchronous iterators yet (hence the question that started this thread). Hopefully the above helps illustrate the complexity hidden behind such a deceptively simple question :) Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On 06/30/2015 12:08 AM, Nick Coghlan wrote:
On 30 June 2015 at 07:51, Ron Adam <ron3200@gmail.com> wrote:
On 06/29/2015 07:23 AM, Nick Coghlan wrote:
It seems (to me) like there are more layers here than needed. I suppose since this is higher order functionality, it may be the nature of it. <shrug>
While browsing the asyncio module, I decided to take a look at the multiprocessing module...

    from multiprocessing import Pool

    def async_map(fn, args):
        with Pool(processes=4) as pool:
            yield from pool.starmap(fn, args)

    def add(a, b):
        return a + b

    values = [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10)]
    print(sum(async_map(add, values)))  # ---> 55

That's really very nice. Are there advantages to asyncio over the multiprocessing module? Cheers, Ron

On 1 July 2015 at 14:25, Ron Adam <ron3200@gmail.com> wrote:
> That's really very nice. Are there advantages to asyncio over the multiprocessing module?
I find it most useful to conceive of asyncio as an implementation of an "event driven programming" development paradigm. This means that after starting out with imperative procedural programming in Python, you can branch out into other more advanced complexity management models like object-oriented programming (class statements, dunder protocols, descriptors, metaclasses, type hinting), functional programming (comprehensions, generators, decorators, closures, functools, itertools), array oriented programming (memoryview, __matmul__, NumPy, SciPy), and event driven programming (asyncio, Twisted, Tornado).

The stark difference between event driven programming and the first three alternate development models I noted is that you can readily implement the notions of "imperative shell, OO core", "imperative shell, functional core", and "imperative shell, array oriented core", where you expose a regular procedural API to other code, and implement it internally using whichever approach makes the most sense for your particular component. Even generators follow this same basic notion of having a clear "start of iteration" and "end of iteration".

The concurrent execution model that most readily aligns with this "imperative shell" approach is concurrent.futures (https://docs.python.org/3/library/concurrent.futures.html) - it's designed to let you easily take particular input->output operations and dispatch them for execution in parallel in separate threads or processes.

By contrast, event driven programming fundamentally changes your execution model from "I will accept inputs at the beginning of the program, and produce outputs at the end of the program" to "I will start waiting for events, responding to them as they arrive, until one of them indicates I should cease operation". "Waiting for an event" becomes a core development concept, as now indicated by the "await" keyword in PEP 492. The "async" keyword in that same PEP indicates that the marked construct may need to wait for events as part of its operation (async def, async for, async with), but exactly *where* those wait points are depends on the specific construct (await expressions in the function body for async def, protocol method invocations for async for and async with).

For the design of asyncio (and similar frameworks) to make any sense at all, it's necessary to approach them with that "event driven programming" mindset - they seem entirely nonsensical when approached with an inputs -> outputs algorithmic mindset, but more logical when considered from a "receive request -> send other requests -> receive responses -> send response" perspective.

For folks that primarily deal with algorithmic problems where inputs are converted to outputs, the event driven model addresses a kind of problem that *they don't have*, so it can seem entirely pointless. However, there really are significant categories of problems (such as network service development) where the event driven model is a genuinely superior design tool. Like array oriented programming (and even object-oriented and functional programming), the benefits can unfortunately be hard to explain to folks that haven't personally experienced the problems these tools address, so folks end up having to take it on faith that we're applying the "Complex is better than complicated" line from the Zen of Python when introducing new modelling techniques into the core language.

Regards, Nick.

P.S.
It *is* possible to implement the "imperative shell, event driven core" model, but it requires a synchronous-to-asynchronous adapter like gevent, or an event-loop-per-thread model and extensive use of "run_until_complete()". It's much more complex than "just use concurrent.futures". P.P.S. Credit to Gary Bernhardt for the "imperative shell, <X> core" phrasing for low-coupling component design, where the external API design is distinct from the internal architectural design. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
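A minimal sketch of that "imperative shell" dispatch style with concurrent.futures (square, squares, and the worker count are illustrative choices, not from the original post):

    from concurrent.futures import ProcessPoolExecutor

    def square(x):
        return x * x

    def squares(values):
        # Synchronous API on the outside, parallel input->output
        # dispatch on the inside.
        with ProcessPoolExecutor(max_workers=4) as pool:
            return list(pool.map(square, values))

    if __name__ == "__main__":
        print(squares(range(5)))  # -> [0, 1, 4, 9, 16]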

On 07/01/2015 01:56 AM, Nick Coghlan wrote:
It makes sense to think of events as IO. Then, on top of that, have some sort of dispatch mechanism, which could be object-oriented, imperative, or functional. A robot control program would be a good practical example to use to test these things. I think it's a good direction for Python to go in. Robots need to process multiple sensor inputs and control multiple external devices all at the same time. I think anything that makes that easier will be good. (It is the future.)

I'm not sure what an event driven core is exactly. It seems to me it would be an event driven (functional, object-oriented, imperative) core. The closest thing I can think of that wouldn't be one of those would be a neural net. Of course it may also be a matter of how we choose to think of things. It's quite possible to have many layers of imperative, functional, and object-oriented code on top of each other. Then we need to indicate the part of the program we are referring to as being X shell/Y core. Cheers, Ron

On 1 July 2015 at 06:56, Nick Coghlan <ncoghlan@gmail.com> wrote:
Hmm, I see what you're getting at here, but my "event driven model" background is with GUI event loops, not with event driven IO. The async/await approach still gives me problems, because I can't map the details of the approach onto the problem domain I'm familiar with.

What I can't quite work out is whether that's simply because asyncio is fundamentally designed around the IO problem (the module name suggests that might be the case, but a lot of the module content around tasks, etc, doesn't seem to be), and so doesn't offer any sort of mental model for understanding how GUI event loop code based on async/await would work, or if it's because the async/await design genuinely doesn't map well onto GUI event loop problems.

I've been poking in the background at trying to decouple the "IO" aspects of asyncio from the "general" ones, but honestly, I'm not getting very far yet. I think what I need to do is to work out how to write a GUI event loop that drives async/await style coroutines, and see if that helps me understand the model better. But there aren't many examples of event loops around to work from (the asyncio one is pretty complex, and it's hard to know how much of that complexity is needed, and how much is because it was developed before async/await were available).

So while I agree that if you don't need an event driven model, it can seem like pointless complexity, I *also* think that the pure callback approach to event driven code is what feels "obvious" to most people. It's maybe not the easiest model to code with, but it is the easiest one to think about - and mentally making the link between callbacks and async/await isn't straightforward. So even though people can understand event-driven problems, they can't, without experience, see how async/await *addresses* that problem. Paul
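A deliberately crude sketch of such a driver: a plain synchronous loop (standing in for a GUI main loop) that drives native coroutines by calling send() on them. Sleep, run, and blink are illustrative names, not an existing API:

    class Sleep:
        """Request object handed to the driver via await."""
        def __init__(self, ticks):
            self.ticks = ticks
        def __await__(self):
            yield self  # surfaces at the driver's coro.send(None) call

    async def blink(name, interval):
        for _ in range(2):
            print(name, "tick")
            await Sleep(interval)

    def run(coros):
        # Crude round-robin driver standing in for a GUI main loop.
        pending = [(coro, 0) for coro in coros]
        tick = 0
        while pending:
            still_pending = []
            for coro, wake_at in pending:
                if tick < wake_at:
                    still_pending.append((coro, wake_at))
                    continue
                try:
                    request = coro.send(None)  # resume until the next await
                    still_pending.append((coro, tick + request.ticks))
                except StopIteration:
                    pass  # coroutine finished
            pending = still_pending
            tick += 1

    run([blink("A", 1), blink("B", 2)])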

On 2 July 2015 at 19:57, Paul Moore <p.f.moore@gmail.com> wrote:
If an operation doesn't need to wait for IO itself, then it can respond immediately using a normal callback (just as a generator is useful for implementing iterators, but would be pointless for a normal function call). async/await is more useful for multi-step processes, and for persistent monitoring of a data source in an infinite loop (e.g. listening for push notifications from a server process). Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On 07/02/2015 05:57 AM, Paul Moore wrote:
Yes, I think there are some parts to it that are difficult to understand still. That could be a documentation thing. Consider a routine roughly organised like this:

    event_loop:
        item_loop1:
            action1    <-- wait for event1
        item_loop2:
            action2    <-- wait for event2
        other_things_loop:
            ...
        sleep
        # continue event_loop

It's not clear to me how to write that with asyncio yet. But I don't think I'm alone. Cheers, Ron
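One hedged reading of that structure in asyncio terms, assuming asyncio.Event objects stand in for event1/event2: each inner loop becomes its own task, and the event loop interleaves them at the await points:

    import asyncio

    async def item_loop(name, event):
        while True:
            await event.wait()  # suspension point: wait for this loop's event
            event.clear()
            print(name)

    async def other_things(e1, e2, tasks):
        for _ in range(2):
            e1.set()                   # fire event1 -> item_loop1 runs action1
            await asyncio.sleep(0.01)
            e2.set()                   # fire event2 -> item_loop2 runs action2
            await asyncio.sleep(0.01)
        for task in tasks:
            task.cancel()
        await asyncio.sleep(0)         # let the cancellations be delivered

    loop = asyncio.get_event_loop()
    e1, e2 = asyncio.Event(), asyncio.Event()
    tasks = [loop.create_task(item_loop("action1", e1)),
             loop.create_task(item_loop("action2", e2))]
    loop.run_until_complete(other_things(e1, e2, tasks))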

One way for a GUI developer to get the hang of the point of asyncio is to imagine this problem: I've written a synchronous, modal, menu-driven app. I want to turn that into a GUI app, maybe using a wizard-like design, with a pane for each menu. To do that, I basically have to turn my control-flow inside-out. But if, at each step, I could just put up the next pane and "await" the user's response, my code would look much the same as the original CLI code. And if I wanted to let the user fire off multiple "wizards" in parallel (an MDI interface, treating each one as a document), it would just work, because each wizard is a separate coroutine that spends most of its time blocked on the event loop. The difference between a server and an MDI app is that you usually need hundreds or thousands of connections as opposed to a handful of documents, but the control flow for each is usually more linear, so the wizard-like design is a much more obvious choice.

On 3 July 2015 at 13:16, Andrew Barnert <abarnert@yahoo.com> wrote:
> The difference between a server and an MDI app is that you usually need hundreds or thousands of connections as opposed to a handful of documents, but the control flow for each is usually more linear, so the wizard-like design is a much more obvious choice.
Ah, thank you - yes, the "stepping through a wizard" case is a good example, as it hits the same kind of multi-step process that causes problems with network applications.

Simple request-response cases are easy to handle with callbacks: "event A happens, invoke callback B, which will trigger action C". If things stop there, you're fine. Things get messier when they start looking like this: "event A happens, invoking callback B, which triggers action C after setting up callback D to wait for event E, which triggers action F after setting up callback G to wait for event H and finally trigger action I".

This is where coroutines help, as that second case instead becomes: "event A happens, invoking coroutine B, which triggers action C, then waits for event E, then triggers action F, then waits for event H, then triggers the final action I".

Rather than having to create a new callback to handle each new action-event pair, you can instead have a single coroutine which triggers an action and then waits for the corresponding event, and may do this multiple times before terminating. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
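A runnable sketch of that coroutine shape (the Events helper, do(), and handler are hypothetical illustrations, not an asyncio API):

    import asyncio

    def do(action):
        print("action", action)

    class Events:
        """Hypothetical event hub: fire() resolves pending wait_for() futures."""
        def __init__(self):
            self._waiters = {}
        def wait_for(self, name):
            fut = asyncio.get_event_loop().create_future()
            self._waiters.setdefault(name, []).append(fut)
            return fut
        def fire(self, name):
            for fut in self._waiters.pop(name, []):
                fut.set_result(None)

    # One linear coroutine replaces a chain of callbacks,
    # one per action-event pair.
    async def handler(events):
        do("C")
        await events.wait_for("E")
        do("F")
        await events.wait_for("H")
        do("I")

    async def main():
        events = Events()
        task = asyncio.ensure_future(handler(events))
        for name in ("E", "H"):
            await asyncio.sleep(0)  # let the handler reach its wait point
            events.fire(name)
        await task  # prints: action C, action F, action I

    asyncio.get_event_loop().run_until_complete(main())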

On 29 June 2015 at 09:32, Nick Coghlan <ncoghlan@gmail.com> wrote:
Note that this requirement to duplicate big chunks of functionality in sync and async forms is a fundamental aspect of the design. It's not easy to swallow (hence the fact that threads like this keep coming up) as it seems to badly violate DRY principles, but it is deliberate. There are a number of blog posts that discuss this "two separate worlds" approach, some positive, some negative. Links have been posted recently in one of these threads, but I'm afraid I don't have them to hand right now. Paul

Hello, On Mon, 29 Jun 2015 11:57:58 +0100 Paul Moore <p.f.moore@gmail.com> wrote:
Maybe not the links you meant, but definitely discussing a split-world problem designers of other languages and APIs face:

What Color is Your Function?
http://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/

Red and Green Callbacks
http://joearms.github.io/2013/04/02/Red-and-Green-Callbacks.html
-- Best regards, Paul mailto:pmiscml@gmail.com

On 24 June 2015 at 10:00, Adam Bartoš <drekin@gmail.com> wrote:
Unfortunately this is not possible with generators or with coroutines. Remember that the async coroutine stuff doesn't actually add any fundamental new capability to the language. It's really just a cleaner syntax for a particular way of using generators. Anything you can do with coroutines is also possible with generators (hence 3.4's asyncio does all its stuff with ordinary generators).

The problem is fundamental: iterable consumers like sum, list etc drive the flow control of execution. You can suspend them by feeding in a generator, but they aren't really suspended the way a generator is, because the consumer remains at the base of the call stack.

If you can rewrite the consumers, though, it is possible to rewrite them in a fairly simple way using generators so that you can push values in, suspending after each push. Suppose I have a consumer function that looks like:

    def func(iterable):
        <init>
        for x in iterable:
            <block>
        return <expr>

I can rewrite it as a feed-in generator like so:

    def gfunc():
        <init>
        yield lambda: <expr>
        while True:
            x = yield
            <block>

When I call this function I get a generator. I can call next on that generator to get a result function. I can then push values in with the send method. When I'm done pushing values in I can call the result function to get the final result. Example:
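A minimal sketch following that description (gsum is an illustrative name):

    def gsum():
        total = 0
        yield lambda: total  # the first yield hands back the result function
        while True:
            x = yield        # each send() pushes one value in
            total += x

    gen = gsum()
    result = next(gen)  # get the result function
    next(gen)           # advance to the first bare yield
    for value in (1, 2, 3):
        gen.send(value)
    print(result())     # -> 6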
You can make a decorator to handle the awkwardness of calling the generator and next-ing it. Also you can use the decorator to provide a consumer function with the inverted consumer behaviour as an attribute:

    import functools

    def inverted_consumer(func):
        @functools.wraps(func)
        def consumer(iterable):
            push, result = inverted()
            for x in iterable:
                push(x)
            return result()
        def inverted():
            gen = func()
            try:
                result = next(gen)
                next(gen)
            except StopIteration:
                raise RuntimeError
            return gen.send, result
        consumer.invert = inverted
        return consumer

    @inverted_consumer
    def mean():
        total = 0
        count = 0
        yield lambda: total / count
        while True:
            x = yield
            total += x
            count += 1

    print(mean([4, 5, 6]))  # prints 5

    push, result = mean.invert()
    push(4)
    push(5)
    push(6)
    print(result())  # Also prints 5

Having implemented your consumer functions in this way you can use them normally, but you can also implement the biconsumer functionality that you wanted (with an obvious generalisation to an N-consumer function):

    def biconsumer(consumer1, consumer2, iterable):
        push1, result1 = consumer1.invert()
        push2, result2 = consumer2.invert()
        for val1, val2 in iterable:
            push1(val1)
            push2(val2)
        return result1(), result2()

Given some of the complaints about two colours of functions in other posts in this thread, perhaps asyncio could take a similar approach. There could be a decorator so that I could define an async function with:

    @sync_callable
    def func(...):
        ...

Then in asynchronous code I could call it as

    x = await func()

or in synchronous code it would be

    x = func.sync_call()

Presumably the sync_call version would fire up an event loop and run the function until complete. Perhaps it could also take other arguments and have a signature like:

    def sync_call_wrapper(args, kwargs, *, loop=None, timeout=None):
        ...

I'm not sure how viable this is given that different asynchronous functions might need different event loops etc., but maybe there's some sensible way to do it. -- Oscar

participants (12)

- Adam Bartoš
- Andrew Barnert
- Andrew Svetlov
- Guido van Rossum
- Jonathan Slenders
- Nick Coghlan
- Oscar Benjamin
- Paul Moore
- Paul Sokolovsky
- Ron Adam
- Stefan Behnel
- Yury Selivanov