[issue21116] Failure to create multiprocessing shared arrays larger than 50% of memory size under linux

Médéric Boquien report at bugs.python.org
Thu Apr 3 11:01:52 CEST 2014


Médéric Boquien added the comment:

"the process will get killed when first writing to the page in case of memory pressure."

According to the documentation, the returned shared array is zeroed. https://docs.python.org/3.4/library/multiprocessing.html#module-multiprocessing.sharedctypes
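
For illustration, a minimal sketch of the kind of allocation being discussed (the element count of 10**6 is arbitrary and not taken from the report):

    import ctypes
    from multiprocessing import sharedctypes

    # Allocate a shared block of one million doubles; per the documentation
    # linked above, the returned array is zero-initialized.
    arr = sharedctypes.RawArray(ctypes.c_double, 10**6)

    # Every element reads back as 0.0, which implies the whole block has
    # already been written to by the time the call returns.
    assert all(x == 0.0 for x in arr)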

In that case, because the entire array is written at allocation time, the process is expected to get killed when it allocates more memory than is available. Unless I am misunderstanding something, which is entirely possible.
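
A hedged sketch of the failure mode the issue title describes, assuming a machine where the requested size exceeds the available memory (the value of n is purely illustrative and not from the report):

    import ctypes
    from multiprocessing import sharedctypes

    # Hypothetical element count: pick n so that n * 8 bytes is larger
    # than the memory available on the machine.
    n = 8 * 10**9

    # Even if the underlying mapping succeeds thanks to overcommit, the
    # zero-fill described above touches every page, so under memory
    # pressure the process is expected to be killed while this call runs.
    big = sharedctypes.RawArray(ctypes.c_double, n)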

----------

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue21116>
_______________________________________

