On Thu, Aug 19, 2021 at 12:52 AM Marc-Andre Lemburg <mal@egenix.com> wrote:
> On 18.08.2021 15:58, Chris Angelico wrote:
>> On Wed, Aug 18, 2021 at 10:37 PM Joao S. O. Bueno <jsbueno@python.org.br> wrote:
>>> So it is out of scope for Python's multiprocessing, and, as I perceive it, for the stdlib as a whole, to be able to allocate specific cores to each subprocess - that is automatically done by the OS. (And of course, since the OS has an interface for it, one can write a specific Python library which would allow this granularity, and it could even check core capabilities.)
>> Python does have a way to set processor affinity, so it's entirely possible to do this. Might need external tools though.
> There's os.sched_setaffinity(pid, mask) which you could use from within a Python task scheduler, if this is managing child processes (you need the right permissions to set the affinity).
Right; I meant that it might require external tools to find out which processors you want to align with.
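For the setting side, though, here's a rough sketch of what it could look like from a parent managing its children - Linux-only, core numbers picked arbitrarily, and assuming you have the permissions:

import multiprocessing
import os

def worker():
    ...  # the actual work would go here

if __name__ == "__main__":
    procs = []
    for core in range(4):  # arbitrary core numbers for the sketch
        p = multiprocessing.Process(target=worker)
        p.start()
        # Pin the child to a single core. os.sched_setaffinity is only
        # available on some Unixes (notably Linux), and the parent needs
        # permission to change the child's affinity.
        os.sched_setaffinity(p.pid, {core})
        procs.append(p)
    for p in procs:
        p.join()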
> Or you could use the taskset command available on Linux to fire up a process on a specific CPU core. lscpu gives you more insight into the installed set of available cores.
Yes, those sorts of external tools.
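And wiring those up from Python is just a subprocess call away - e.g. something like this, where worker.py is a hypothetical script (Linux-only):

import subprocess

# Launch a worker pinned to CPU core 2 using the external taskset tool.
subprocess.run(["taskset", "-c", "2", "python", "worker.py"], check=True)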
It MAY be possible to learn about processors by reading /proc/cpuinfo, but that'd still be OS-specific (no idea which Unix-like operating systems have that, and certainly Windows doesn't).
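Roughly along these lines, purely as a Linux-specific sketch:

import os

# Cores this process is currently allowed to run on (Linux-only).
print(os.sched_getaffinity(0))

# Crude discovery by parsing /proc/cpuinfo (also Linux-only).
with open("/proc/cpuinfo") as f:
    models = [line.split(":", 1)[1].strip()
              for line in f if line.startswith("model name")]
print(len(models), "logical CPUs:", set(models))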
All in all, far easier to just divide the job into far more pieces than you have processors, and then run a pool.
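Something like this - a minimal sketch with placeholder work, letting the OS decide where everything runs:

import multiprocessing

def crunch(chunk):
    # Placeholder for the real computation
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    # Many more chunks than cores; the pool keeps every core busy and
    # the OS handles placement across processors.
    chunks = [range(i * 1000, (i + 1) * 1000) for i in range(100)]
    with multiprocessing.Pool() as pool:
        results = pool.map(crunch, chunks)
    print(sum(results))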
ChrisA