
Sven R. Kunze writes:

 > However, the main intention has not been changed: lowering the
 > entry barriers.
AFAICS, the entry barriers to concurrency are quite low for the class
of problems most like your examples as posted: just use process pools.
Sure, the API is function calls or method applications rather than
dedicated syntax, but that's very Pythonic: don't do in syntax what
can be done in a function.  (Cf. print.)  And it's true that a bit of
boilerplate is involved, but it seems to me that if that's an entry
barrier, a "SimpleProcessPool" class that handles the boilerplate is
the way to go:

    from multiprocessing import SimpleProcessPool as TaskPool
    p = TaskPool()
    list_of_futures = [p.start_task(task) for task in tasks]

If it's suitable for threads, just change "multiprocessing" to
"threading" and "SimpleProcessPool" to "SimpleThreadPool".  Asyncio is
less mature, but I suppose it could probably be shoehorned into this
"TaskPool" framework, too.  How much lower can the entry barrier get?

I know you said you're willing to give up some generality to handle
common cases, but (speaking for myself; maybe Andrew and Nick have a
clue I don't) I don't see at all what those cases are.  The ones I can
think of that are simple enough to be handled automatically by the
compiler and runtime I imagine fit the multiprocessing/message-passing
model well.  The complicated use cases are just hard to do concisely,
safely, and efficiently in any model.  Although different models make
different aspects easy, none of them makes everything easy, and the
"optimal" choice of model is a matter of the "art of programming".

I think you need to come up with one or more compelling examples of
real use cases that are dramatically simplified by "fork" syntax or a
similar device.
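For the record, neither "SimpleProcessPool" nor "SimpleThreadPool"
exists in the stdlib today; those names, and "start_task", are this
post's invented interface.  A rough sketch of the thread-backed
variant, built on the real concurrent.futures API, might look like:

```python
from concurrent.futures import ThreadPoolExecutor

class SimpleThreadPool:
    """Hypothetical wrapper that hides the executor boilerplate.

    'start_task' is the method name imagined in the post above,
    not an existing stdlib API.
    """

    def __init__(self, max_workers=None):
        self._executor = ThreadPoolExecutor(max_workers=max_workers)

    def start_task(self, task, *args, **kwargs):
        # Returns a concurrent.futures.Future for the running task.
        return self._executor.submit(task, *args, **kwargs)

    def shutdown(self):
        self._executor.shutdown(wait=True)

# Usage mirroring the post's example:
tasks = [lambda i=i: i * i for i in range(5)]
pool = SimpleThreadPool()
list_of_futures = [pool.start_task(task) for task in tasks]
results = [f.result() for f in list_of_futures]  # [0, 1, 4, 9, 16]
pool.shutdown()
```

A process-backed version would swap in ProcessPoolExecutor, with the
usual caveat that tasks must then be picklable top-level functions.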