[New-bugs-announce] [issue39617] max_workers argument to concurrent.futures.ProcessPoolExecutor is not flexible enough

sds report at bugs.python.org
Wed Feb 12 12:21:04 EST 2020


New submission from sds <sds at gnu.org>:

The number of workers (max_workers) I want to use often depends on the server load.
Imagine this scenario: I have 64 CPUs and I need to run 200 processes.
However, others are using the server too, and the loadavg is currently 50, so I set `max_workers` to (say) 20.
But 2 hours later, when those 20 processes are done, the loadavg is 0 (because the 50 processes run by my colleagues are done too), so I want to increase the pool size (max_workers) to 70.
It would be nice if it were possible to adjust the pool size based on the server loadavg each time a worker is started.
Basically, the intent is to maintain a stable load average and full resource utilization.
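
The executor API does not support this today, but a rough workaround is possible from user code: create the pool with the largest max_workers you would ever want, and gate new submissions on os.getloadavg(). Idle pool workers just sleep, so the effective concurrency tracks how many tasks are in flight. Below is a minimal sketch, assuming a Unix host (os.getloadavg() is unavailable on Windows); run_load_aware and burn are made-up names for this example, not stdlib API.

#!/usr/bin/env python3
# Sketch of a load-aware submission loop. run_load_aware and burn are
# illustrative names, not part of concurrent.futures.
import os
from concurrent.futures import FIRST_COMPLETED, ProcessPoolExecutor, wait

def burn(n):
    # Hypothetical CPU-bound task standing in for the real workload.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_load_aware(fn, jobs, max_workers, target_load, poll_interval=5.0):
    # Create the pool at the largest size ever wanted; idle workers
    # sleep, so the actual concurrency is set by how many tasks are
    # in flight, which we gate on the 1-minute loadavg.
    results = []
    pending = set()
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        for job in jobs:
            while (len(pending) >= max_workers
                   or os.getloadavg()[0] >= target_load):
                # Reap finished tasks; if nothing completes (or nothing
                # is pending), this just sleeps poll_interval seconds.
                done, pending = wait(pending, timeout=poll_interval,
                                     return_when=FIRST_COMPLETED)
                results.extend(f.result() for f in done)
            pending.add(pool.submit(fn, job))
        done, _ = wait(pending)  # drain the remaining tasks
        results.extend(f.result() for f in done)
    return results

if __name__ == "__main__":
    # 200 jobs on (say) a 64-CPU box: allow up to 70 workers, but aim
    # to keep the loadavg near the CPU count.
    results = run_load_aware(burn, [2_000_000] * 200,
                             max_workers=70, target_load=os.cpu_count())
    print(len(results))

This can only hold back new submissions, not preempt tasks that are already running, and loadavg reacts slowly, so the initial burst can overshoot; native support in the executor (e.g. consulting a callback before spawning each worker) would still be the cleaner fix.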

----------
components: Library (Lib)
messages: 361905
nosy: sam-s
priority: normal
severity: normal
status: open
title: max_workers argument to concurrent.futures.ProcessPoolExecutor is not flexible enough
type: enhancement
versions: Python 3.8

_______________________________________
Python tracker <report at bugs.python.org>
<https://bugs.python.org/issue39617>
_______________________________________

