[Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

Stephan Hoyer shoyer at gmail.com
Mon Apr 11 12:37:41 EDT 2016

On Mon, Apr 11, 2016 at 5:39 AM, Matěj Týč <matej.tyc at gmail.com> wrote:

> * ... I do see some value in providing a canonical right way to
> construct shared memory arrays in NumPy, but I'm not very happy with
> this solution, ... terrible code organization (with the global
> variables):
> * I understand that; however, this is a pattern of Python
> multiprocessing, and everybody who wants to use the Pool with shared
> data either is familiar with this approach or has to become familiar
> with it [2, 3]. A good compromise is to have a separate module for each
> parallel calculation, so the global variables are not a problem.

OK, we can agree to disagree on this one. I still don't think I could get
code using this pattern checked in at my work (for good reason).

> * If there's some way we can paper over the boilerplate such that
> users can use it without understanding the arcana of multiprocessing,
> then yes, that would be great. But otherwise I'm not sure there's
> anything to be gained by putting it in a library rather than referring
> users to the examples on StackOverflow [1] [2].
> * What about telling users: "You can use numpy with multiprocessing.
> Remember the multiprocessing.Value and multiprocessing.Array classes?
> numpy.shm works exactly the same way, which means that it shares their
> limitations. Refer to an example: <link to numpy doc>." Notice that
> although those SO links contain all of the information, it was very
> difficult to get it up and running for a newcomer like me a few years
> ago.

I guess I'm still not convinced this is the best we can do with the
multiprocessing library. If we're going to do this, then we definitely need
to have the fully canonical example.

For example, could you make the shared array a global variable and then
still pass references to functions called by the processes anyway? The
examples on stackoverflow that we're both looking at are varied enough that
it's not obvious to me that this is as good as it gets.

> * This needs tests and justification for custom pickling methods,
> which are not used in any of the current examples. ...
> * I am sorry, but I don't fully understand that point. The custom
> pickling method of shmarray has to be there on Windows, but users
> don't have to know about it at all. As noted earlier, the global
> variable is the only way of using standard Python multiprocessing.Pool
> with shared objects.

That sounds like a fine justification, but given that it wasn't obvious, it
needs a comment saying as much in the source code :). Also, it breaks
pickle, which is another limitation that needs to be documented.
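For context, the kind of custom pickling hook being discussed might look roughly like this (the class and helper names are illustrative, not the PR's actual implementation): under the Windows "spawn" start method everything sent to a child is pickled, and a plain ndarray would arrive as a copy, so `__reduce__` rebuilds the view from the shared buffer handle instead of serializing the contents.

```python
import multiprocessing as mp
import numpy as np

def _rebuild(raw, dtype, shape):
    # Called on unpickling: re-wrap the shared buffer rather than
    # reconstructing the array from a byte copy of its contents.
    return shmarray(raw, dtype, shape)

class shmarray(np.ndarray):
    """Hypothetical ndarray view over a shared-memory ctypes buffer."""

    def __new__(cls, raw, dtype, shape):
        obj = np.frombuffer(raw, dtype=dtype).reshape(shape).view(cls)
        obj._raw, obj._dtype, obj._shape = raw, dtype, shape
        return obj

    def __reduce__(self):
        # Pickle the buffer handle plus metadata, not the array data.
        return (_rebuild, (self._raw, self._dtype, self._shape))
```

Note the limitation raised above: this round-trips only through multiprocessing's own pickler, which knows how to reduce shared ctypes buffers; ordinary `pickle` will typically fail on the underlying buffer, which is the "breaks pickle" caveat that should be documented.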
