[Python-ideas] Making concurrent.futures.Futures awaitable

Alex Grönholm alex.gronholm at nextday.fi
Sat Aug 8 14:47:46 CEST 2015


08.08.2015, 11:12, Nick Coghlan wrote:
> On 8 August 2015 at 03:08, Guido van Rossum <guido at python.org> wrote:
>> FWIW, I am against this (as Alex already knows), for the same reasons I
>> didn't like Nick's proposal. Fuzzing the difference between threads and
>> asyncio tasks is IMO asking for problems -- people will stop understanding
>> what they are doing and then be bitten when they least need it.
> I'm against concurrent.futures offering native asyncio support as well
> - that dependency already goes the other way, from asyncio down to
> concurrent.futures by way of the loop's pool executor.
Nobody is suggesting that. The __await__ support suggested for 
concurrent Futures is generic and has no ties whatsoever to asyncio.
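To illustrate what "generic" means here, a purely hypothetical sketch of a 
Future subclass with __await__ support that never touches asyncio (the class 
name and the wakeup mechanism are illustrative only, not part of the proposal):

    from concurrent.futures import Future

    class AwaitableFuture(Future):
        # Illustrative only: __await__ yields the not-yet-done future and
        # leaves it to whatever coroutine runner is driving the coroutine
        # to arrange a wakeup (e.g. via add_done_callback()) once the
        # result is ready. Nothing here imports or depends on asyncio.
        def __await__(self):
            if not self.done():
                yield self
            return self.result()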
> The only aspect of my previous suggestions I'm still interested in is
> a name and signature change from "loop.run_in_executor(executor,
> callable)" to "loop.call_in_background(callable, *, executor=None)".
That name and argument placement would be better, but are you 
suggesting that the ability to pass along extra arguments should be 
removed? The original method was bad enough in that it only supported 
positional and not keyword arguments, forcing users to pass partial() 
objects as callables.
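For illustration, this is the kind of workaround needed today 
(some_blocking_api and its timeout keyword are hypothetical stand-ins):

    import asyncio
    from functools import partial

    async def handler(self):
        loop = asyncio.get_event_loop()
        # run_in_executor() forwards only positional arguments, so keyword
        # arguments have to be baked into the callable with partial():
        result = await loop.run_in_executor(
            None, partial(some_blocking_api.some_blocking_call, timeout=10))
        await self.write(result)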
> Currently, the recommended way to implement a blocking call like
> Alex's example is this:
>
>      import asyncio
>
>      async def handler(self):
>          loop = asyncio.get_event_loop()
>          result = await loop.run_in_executor(
>              None, some_blocking_api.some_blocking_call)
>          await self.write(result)
>
> I now see four concrete problems with this specific method name and signature:
>
>      * we don't run functions, we call them
>      * we do run event loops, but this call doesn't start an event loop running
>      * "executor" only suggests "background call" to folks that already
> know how concurrent.futures works
>      * we require the explicit "None" boilerplate to say "use the
> default executor", rather than using the more idiomatic approach of
> accepting an alternate executor as an optional keyword-only argument
>
> With the suggested change to the method name and signature, the same
> example would instead look like:
>
>      async def handler(self):
>          loop = asyncio.get_event_loop()
>          result = await loop.call_in_background(
>              some_blocking_api.some_blocking_call)
>          await self.write(result)
Am I the only one who's bothered by the fact that you have to get a 
reference to the event loop first?
Wouldn't this be better:

async def handler(self):
    result = await asyncio.call_in_background(
        some_blocking_api.some_blocking_call)
    await self.write(result)


The call_in_background() function would return an awaitable object that 
is recognized by the asyncio Task class, which would then submit the 
function to the default executor of the event loop.
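As a rough sketch of the observable behavior only (ignoring the 
Task-recognition detail, and using nothing beyond the APIs that already 
exist), such a helper could look roughly like this:

    import asyncio

    def call_in_background(func, *args, executor=None):
        # Hypothetical sketch: no event loop reference is needed at the
        # call site; the executor submission happens only once the
        # returned coroutine object is awaited inside a running task.
        async def waiter():
            loop = asyncio.get_event_loop()
            return await loop.run_in_executor(executor, func, *args)
        return waiter()

A coroutine would then simply write "result = await 
call_in_background(some_blocking_call)" without ever touching the loop object.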
> That should make sense to anyone reading the handler, even if they
> know nothing about concurrent.futures - the precise mechanics of how
> the event loop goes about handing off the call to a background thread
> or process is something they can explore later, they don't need to
> know about it in order to locally reason about this specific handler.
>
> It also means that event loops would be free to implement their
> *default* background call functionality using something other than
> concurrent.futures, and only switch to the latter if an executor was
> specified explicitly.
Do you mean background calls that don't return objects compatible with 
concurrent.futures.Futures?
Can you think of a use case for this?
>
> There are still some open questions about whether it makes sense to
> allow callables to indicate whether or not they expect to be IO bound
> or CPU bound,
What do you mean by this?
>   and hence allow event loop implementations to opt to
> dispatch the latter to a process pool by default
Bad idea! The semantics are too different and process pools have too 
many limitations.
>   (I saw someone
> suggest that recently, and I find the idea intriguing), but I think
> that's a separate question from dispatching a given call for parallel
> execution, with the result being awaited via a particular event loop.
>
> Cheers,
> Nick.
>


