[Python-ideas] The async API of the future: yield-from

Greg Ewing greg.ewing at canterbury.ac.nz
Mon Oct 15 07:58:35 CEST 2012

Guido van Rossum wrote:

> Why wouldn't all generators that aren't blocked for I/O just run until
> their next yield, in a round-robin fashion? That's fair enough for me.
> But as I said, my intuition for how things work in Greg's world is not
> very good.

That's exactly how my scheduler behaves.
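A minimal sketch of that behaviour, assuming tasks are plain generators that yield to give up control (the names run and worker are illustrative, not from my actual scheduler):

```python
from collections import deque

def run(tasks):
    # Round-robin: resume each generator in turn until its next yield.
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            task.send(None)       # run the task up to its next yield
        except StopIteration:
            continue              # task finished; drop it
        ready.append(task)        # still alive; back of the queue

log = []

def worker(name, n):
    # Hypothetical task: record a step, then yield control.
    for i in range(n):
        log.append((name, i))
        yield

run([worker("a", 2), worker("b", 2)])
# log is now [("a", 0), ("b", 0), ("a", 1), ("b", 1)]
```

Each task gets one step per pass through the queue, so tasks that aren't blocked interleave fairly.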

> OTOH I am okay with only getting one of the exceptions. But I think
> all of the remaining tasks should still be run to completion -- maybe
> the caller just cared about their side effects. Or maybe this should
> be an option to par().

This is hard to answer without considering real use cases,
but my feeling is that if I care enough about the results of
the subtasks to wait until they've all completed before continuing,
and anything goes wrong in any of them, then I might as well
abandon the whole computation.

If that's not the case, I'd be happy to wrap each one in a
try-except that doesn't propagate the exception to the main
task, but instead records the fact that the subtask failed
somewhere for the main task to check afterwards.
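Such a wrapper might look something like this under yield-from (guarded, drive, and the helper tasks are hypothetical names for the sake of the sketch):

```python
def guarded(subtask, results, key):
    # Run a subtask, storing its result; if it raises, record the
    # exception instead of propagating it into the main task.
    try:
        results[key] = yield from subtask
    except Exception as e:
        results[key] = e          # main task inspects this afterwards

def ok():
    return 42
    yield                          # unreachable; makes this a generator

def bad():
    raise ValueError("boom")
    yield                          # unreachable; makes this a generator

def drive(gen):
    # Minimal driver: run one generator to completion.
    try:
        while True:
            gen.send(None)
    except StopIteration:
        pass

results = {}
drive(guarded(ok(), results, "ok"))
drive(guarded(bad(), results, "bad"))
# results["ok"] == 42; results["bad"] is the recorded ValueError
```

The main task then scans results and decides for itself what a failure in any given subtask should mean.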

Another direction to approach this is to consider that par()
ought to be just an optimisation -- the result should be the same
as if you'd written sequential code to perform the subtasks
one after another. And in that case, an exception in one would
prevent any of the following ones from executing, so it's fine
if par() behaves like that, too.
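Under that view, the reference semantics of par() are just the sequential loop, which one could sketch as follows (the drive and value helpers are illustrative assumptions, not a proposed API):

```python
def par(*subtasks):
    # Reference semantics: results come back in order, and an
    # exception in one subtask abandons the rest, exactly as
    # sequential code would.
    results = []
    for sub in subtasks:
        results.append((yield from sub))
    return results

def drive(gen):
    # Minimal driver: run a generator to completion, return its value.
    try:
        while True:
            gen.send(None)
    except StopIteration as e:
        return e.value

def value(x):
    return x
    yield                          # unreachable; makes this a generator

out = drive(par(value(1), value(2), value(3)))
# out == [1, 2, 3]
```

A real par() would interleave the subtasks for speed, but as long as it raises on the first failure it stays faithful to this sequential baseline.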
