Add os.usable_cpu_count()

Recently os.cpu_count() on Windows has been fixed to take processor groups into account and return the number of all available CPUs: http://bugs.python.org/issue30581

This made me realize that os.cpu_count() does not return the number of *usable* CPUs, which could possibly be a better default value for things like multiprocessing and process pools. On UNIX it is currently possible to retrieve this info with len(os.sched_getaffinity(0)), which takes CPU affinity and (I think) Linux cgroups into account, but it's not possible to do the same on Windows, which exposes this value via the GetActiveProcessorCount() API.

As such I am planning to implement this in psutil, but would like to know how python-ideas feels about adding a new os.usable_cpu_count() function (or having os.cpu_count(usable=True)). Thoughts?

--
Giampaolo - http://grodola.blogspot.com
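[A minimal sketch of how such a helper could be approximated today; the name usable_cpu_count() is the proposed one and the fallback behavior is an assumption, not part of the proposal itself:]

    import os

    def usable_cpu_count():
        # CPUs this process may actually run on.
        if hasattr(os, "sched_getaffinity"):
            # Affinity mask of the current process (pid 0); honors taskset
            # and, at least on Linux, possibly cpuset cgroups as well.
            return len(os.sched_getaffinity(0))
        # Fallback (e.g. Windows): total CPUs in the system, which may
        # overcount the CPUs that are really usable by this process.
        return os.cpu_count() or 1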

On Mon, Sep 4, 2017 at 8:59 PM, Giampaolo Rodola' <g.rodola@gmail.com> wrote:
This was discussed in https://bugs.python.org/issue23530

It looks like the resolution at that time was:

- os.cpu_count() should *not* report the number of CPUs accessible to the current process, but rather continue to report the number of CPUs that exist in the system (whatever that means in these days of virtualization... e.g. if you use KVM to set up a virtual machine with limited CPUs, then that changes os.cpu_count(), but if you do it with docker then it doesn't).
- multiprocessing and similar should continue to treat os.cpu_count() as if it returned the number of CPUs accessible to the current process.

Possibly some lines got crossed there...

-n

--
Nathaniel J. Smith -- https://vorpus.org
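[A small illustration of the two notions being distinguished here; POSIX only, and on an unrestricted process the two values will typically be equal:]

    import os

    # CPUs that exist in the system (what os.cpu_count() reports):
    print(os.cpu_count())
    # CPUs the current process is allowed to run on, e.g. after
    # "taskset -c 0,1 python3 ..." this would be 2:
    print(len(os.sched_getaffinity(0)))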

On Tue, Sep 5, 2017 at 12:59 PM, Nathaniel Smith <njs@pobox.com> wrote:
I agree that the current os.cpu_count() behavior should remain unchanged. The point is that multiprocessing & similar are currently not taking CPU affinity and Linux cgroups into account, and so may spawn more processes than necessary.

--
Giampaolo - http://grodola.blogspot.com
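[A sketch of the workaround available today for the oversubscription issue; POSIX only, and the chosen example workload is purely illustrative:]

    import os
    import multiprocessing

    if __name__ == "__main__":
        # multiprocessing.Pool() defaults to os.cpu_count() worker processes;
        # sizing it from the affinity mask instead avoids oversubscription
        # when the process is pinned (taskset) or restricted by a cpuset cgroup.
        nproc = len(os.sched_getaffinity(0))
        with multiprocessing.Pool(processes=nproc) as pool:
            print(pool.map(abs, range(-4, 4)))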
