On Feb 15, 2020, at 20:29, Kyle Stanley <aeros167@gmail.com> wrote:

Add a SerialExecutor, which does not use threads or processes

Andrew Barnert wrote:
> e.g., in C++, you only use executors via the std::async function, and you can just pass a launch option instead of an executor to run synchronously

In the case of C++'s std::async though, it still launches a thread to run the function within, no?

No; the point of launch policies is that you can (without needing an executor object[1]) tell the task to run “async” (on its own thread[2]), “deferred” (serially[3] on first demand), or “immediate” (serially right now)[4]. You can even OR together multiple policies to let the implementation choose, and IIRC the default is async|deferred.

At any rate, I’m not suggesting that C++ is a design worth looking at, just parenthetically noting it as an example of how when libraries don’t have a serial executor, it’s often because they already have a different way to specify the same thing.

This doesn't require the user to explicitly create or interact with the thread in any way, but that seems to go against what OP was looking for:

Jonathan Crall wrote:
> Often times a developer will want to run a task in parallel, but depending on the environment they may want to disable threading or process execution.

The *concrete* purpose of what that accomplishes (in the context of CPython) isn't clear to me. How exactly are you running the task in parallel without using a thread, process, or coroutine [1]?

I’m pretty sure what he meant is that the developer _usually_ wants the task to run in parallel, but in some specific situation he wants it to _not_ run in parallel.

The concrete use case I’ve run into is this: I’ve got some parallel code that has a bug. I’m pretty sure the bug isn’t actually related to the shared data or the parallelism itself, but I want to be sure. I replace the ThreadPoolExecutor with a SyncExecutor and change nothing else about the code, and the bug still happens. Now I’ve proven that the bug isn’t related to parallelism. And, as a bonus, I’ve got nice logs that aren’t interleaved into a big mess, so it’s easier to track down the problem.

I have no idea if this is Jonathan’s use, but it is the reason I’ve built something similar myself.
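For concreteness, here's a rough sketch of the kind of thing I built (the name SyncExecutor and the details are my own, not from any proposal): an Executor subclass whose submit() runs the callable immediately in the calling thread and hands back an already-completed Future, so result(), exception(), map(), and the context-manager protocol all work unchanged.

```python
from concurrent.futures import Executor, Future


class SyncExecutor(Executor):
    """Run each submitted callable immediately, in the calling thread.

    A drop-in stand-in for ThreadPoolExecutor: same submit()/map()/
    shutdown() interface, but everything runs serially, so there is no
    parallelism to worry about and logs come out in order.
    """

    def submit(self, fn, *args, **kwargs):
        future = Future()
        try:
            # Run the task right now and store its result...
            future.set_result(fn(*args, **kwargs))
        except BaseException as exc:
            # ...or capture the exception, exactly as a worker
            # thread would, so future.result() re-raises it.
            future.set_exception(exc)
        return future
```

Then the debugging experiment is a one-line change: construct a SyncExecutor where the code currently constructs a ThreadPoolExecutor, and everything downstream behaves identically except that nothing runs concurrently.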

---

[1] Actually, the version that got into C++11 doesn’t even have executors, only launch policies. It also doesn’t have .then() continuation methods, composing functions like all and as_completed, … It’s basically useless. All of those other features got deferred to a technical specification that was supposed to land before C++14 but got pushed back repeatedly until it came out after C++17, and then got withdrawn, and now they’re awaiting proposals for a second TS. Which will probably come after the language has first-class coroutines and maybe fibers, and async/await, so they may well have to redesign the whole futures model yet again to make futures awaitable…

[2] Actually “as if on its own thread”. But AFAIK, every implementation handles this by spawning a thread. I think the distinction is for future expansions, either so they can do something like Java’s ForkJoinPool, or so they can use fibers or coroutines that don’t care what thread they’re on.

[3] In C++ futures lingo, “serial” actually means an executor that runs all tasks on a single background thread, with a queue that’s guaranteed to be mutex-locked rather than lock-free. But I mean “serial” in Jonathan’s sense here.

[4] Checking the docs, it looks like the immediate policy didn’t make it into C++11 either. But anyway, the deferred policy did, and that’s serial in Jonathan’s sense.