It is out of the scope of Python's "multiprocessing" module, and, as I perceive it, of the stdlib as a whole, to allocate specific cores to each subprocess -
that is done automatically by the O.S. (and of course, since the O.S. has an interface
for it, one can write a specific Python library which would allow this granularity,
and it could even check core capabilities).
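(As an aside: on Linux the kernel's affinity interface does surface in the stdlib's `os` module as `os.sched_setaffinity`/`os.sched_getaffinity` - it is platform-specific, though, which is why a sketch like this one has to guard for its presence:)

```python
import os

# Linux-only API, so check for it before calling (absent on Windows/macOS).
if hasattr(os, "sched_getaffinity"):
    allowed = os.sched_getaffinity(0)  # pid 0 means "the calling process"
    print("cores this process may run on:", allowed)
    # Illustrative only: pin this process to a single core from that set.
    os.sched_setaffinity(0, {min(allowed)})
    print("now restricted to:", os.sched_getaffinity(0))
```

Even so, this only restricts which cores a process may use - it does not solve the scheduling problem the pool approach below handles for free.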
As it stands, however, you simply have to change your approach:
instead of dividing your workload among the cores before starting, the
common approach here is to set up worker processes, one per core (or
per processor thread), and use those as a pool of resources to which
you submit your processing work in chunks.
That way, if a worker happens to land on a faster core, it will
finish its chunk earlier and accept more work while the
slower cores are still busy.
If you use "concurrent.futures" or a similar approach, this pattern emerges naturally, with no
specific fiddling needed on your part.