
Begin forwarded message:

From: Andrew Barnert <>
Date: January 21, 2014, 4:20:19 PST
To: Ram Rachum <>
Cc: "" <>
Subject: Re: [Python-ideas] Add `n_threads` argument to `concurrent.futures.ProcessPoolExecutor`

On Jan 21, 2014, at 2:17, Ram Rachum <> wrote:

If you're writing code that needs to do both a lot of I/O and a lot of CPU work. For example, you're downloading many items from the internet and then doing post-processing on them.

Yes, but in that case, how could a single executor with n processes and m threads help at all? You can only have one thread per process doing CPU work; they're still going to end up blocking each other.

And this is very easy to solve: run the downloads on a thread pool and, as each one finishes, kick its post-processing off to a process pool.
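A minimal sketch of that pattern, using only the stdlib `concurrent.futures` API; the `download` and `postprocess` functions here are made-up stand-ins for the real I/O-bound and CPU-bound work:

```python
from concurrent.futures import (ThreadPoolExecutor, ProcessPoolExecutor,
                                as_completed)

def download(url):
    # Stand-in for a real network fetch (I/O-bound).
    return ("payload from %s" % url).encode()

def postprocess(data):
    # Stand-in for CPU-heavy work that benefits from a separate process.
    # Must be defined at module level so it can be pickled for the
    # process pool.
    return len(data)

def run(urls):
    with ThreadPoolExecutor(max_workers=8) as threads, \
         ProcessPoolExecutor() as processes:
        dl_futures = [threads.submit(download, u) for u in urls]
        # As each download finishes, kick its post-processing off
        # to the process pool.
        cpu_futures = [processes.submit(postprocess, f.result())
                       for f in as_completed(dl_futures)]
        return [f.result() for f in cpu_futures]
```

The threads only ever block on I/O, so the GIL doesn't matter for them, while all the CPU work runs in separate processes that can use every core.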

But you should be able to build the two-tier pool in under half an hour, and then you can test to find applications where it really does or doesn't help.
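To illustrate how little code that takes, here is one rough sketch of such a two-tier pool; the `TwoTierPool` name and its `submit(io_fn, cpu_fn, *args)` interface are my own invention, not anything proposed for the stdlib:

```python
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor, Future

class TwoTierPool:
    """Run an I/O-bound stage on a thread pool, then a CPU-bound stage
    on a process pool, returning one future for the combined result."""

    def __init__(self, n_threads=8, n_processes=None):
        self._threads = ThreadPoolExecutor(max_workers=n_threads)
        self._processes = ProcessPoolExecutor(max_workers=n_processes)

    def submit(self, io_fn, cpu_fn, *args):
        outer = Future()

        def stage_one():
            try:
                # First tier: I/O work runs on this worker thread.
                data = io_fn(*args)
                # Second tier: hand the result to the process pool.
                inner = self._processes.submit(cpu_fn, data)
            except Exception as exc:
                outer.set_exception(exc)
                return

            def finish(f):
                exc = f.exception()
                if exc is not None:
                    outer.set_exception(exc)
                else:
                    outer.set_result(f.result())

            inner.add_done_callback(finish)

        self._threads.submit(stage_one)
        return outer

    def shutdown(self):
        self._threads.shutdown()
        self._processes.shutdown()

def fake_fetch(url):
    # Stand-in for an I/O-bound download.
    return url * 3

def crunch(s):
    # Stand-in for CPU-bound post-processing; module-level so it pickles.
    return len(s)
```

Usage is `TwoTierPool().submit(fake_fetch, crunch, "some-url").result()`. Note that `cpu_fn` and its argument must be picklable, exactly as with a plain `ProcessPoolExecutor`.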

On Tue, Jan 21, 2014 at 10:42 AM, Andrew Barnert <> wrote:
On Jan 17, 2014, at 5:00, Ram Rachum <> wrote:

> Hi,
> I'd like to use `concurrent.futures.ProcessPoolExecutor` but have each process contain multiple worker threads. We could have an `n_threads` argument to the constructor, defaulting to 1 to maintain backward compatibility, and setting a value higher than 1 would cause multiple threads to be spawned in each process.

What for?

Generally you use processes because you can't use threads. Whether that's because you're running CPU-bound code that needs to get around the GIL, because you want complete isolation between tasks, because your platform doesn't support threads, or any other reason I can think of, you wouldn't want multiple threads per process either.

There are use cases for multiple processes with multiple threads each, like running four independent IOCP-based servers (let them all try to use all your cores and let the kernel load-balance among them), or isolated tasks with sharing-based subtasks... But those kinds of uses don't make sense in a single executor.