Howdy fellows,

I've been thinking about adding a global thread executor to Python.

It could be built on something like concurrent.futures and created on demand.

There are times when I find myself needing to run just a single function in a thread, because there's practically no other way. The function is quick, and only needs to run infrequently throughout the code.

Take socket.getaddrinfo, for example. Even when a socket is in non-blocking mode, this function always blocks; even asyncio works around it by running it on a different thread.
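For context, this is roughly what the current workaround looks like today when you're not inside an asyncio event loop (a minimal sketch using only stdlib APIs; the host and port are arbitrary):

    import socket
    from concurrent.futures import ThreadPoolExecutor

    # getaddrinfo has no non-blocking variant, so to avoid blocking the
    # caller we have to push it onto a thread of our own.
    with ThreadPoolExecutor(max_workers=1) as executor:
        future = executor.submit(socket.getaddrinfo, "python.org", 443)
        # ... do other work while the lookup runs ...
        addr_info = future.result()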

Each program, library, or piece of code that wants to call it asynchronously has to spin up a thread (with all the associated overhead) just for that single function. This happens in plenty of libraries, each initializing its own thread for a one-night stand.

If we had a global ThreadPoolExecutor, we could use it for exactly that. The threads would be shared, and the overhead would occur only once. Users of the executor would know it's a limited resource that may be full at times, and as responsible programmers they wouldn't use it for infinite loops that clog the whole system.
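As a rough illustration of the idea (the name get_global_executor, the worker count, and the module placement are placeholders I made up, not a concrete API proposal), a lazily created shared pool could look something like this:

    import socket
    import threading
    from concurrent.futures import ThreadPoolExecutor

    _global_executor = None
    _lock = threading.Lock()

    def get_global_executor():
        """Return the shared executor, creating it on first use."""
        global _global_executor
        with _lock:
            if _global_executor is None:
                _global_executor = ThreadPoolExecutor(
                    max_workers=8, thread_name_prefix="global-pool"
                )
            return _global_executor

    # Any library can now borrow a thread for a short, one-off call
    # instead of creating its own pool:
    future = get_global_executor().submit(socket.getaddrinfo, "python.org", 443)
    addr_info = future.result()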

Even if a programmer uses it irresponsibly (shame), we still have future.result(timeout) to cover us, and we can issue a warning if we must.
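For example, a cautious caller can bound the wait and warn instead of hanging (plain stdlib; the sleeping task just stands in for a misbehaving submission):

    import time
    import warnings
    from concurrent.futures import ThreadPoolExecutor, TimeoutError

    executor = ThreadPoolExecutor(max_workers=1)
    future = executor.submit(time.sleep, 10)  # stand-in for a misbehaving task
    try:
        future.result(timeout=1)
    except TimeoutError:
        warnings.warn("shared executor task exceeded its time budget")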

Any inputs would be appreciated,
Bar Harel.