
On Wed, Jun 24, 2015 at 9:26 AM, Sturla Molden <sturla.molden@gmail.com> wrote:
> On 24/06/15 07:01, Eric Snow wrote:
>
> There are two major competing standards for parallel computing in
> science and engineering: OpenMP and MPI.  OpenMP is based on a shared
> memory model.  MPI is based on a distributed memory model and uses
> message passing (hence its name).
>
> [snip]
Thanks for the great explanation!
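Tangentially, for anyone who hasn't touched MPI: here is a minimal sketch of the message-passing model, assuming the mpi4py bindings (just one binding among several).  Each process owns its memory outright; the only way data crosses process boundaries is an explicit send/recv pair.

    # Run with e.g.: mpiexec -n 2 python demo.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        # Rank 0 owns this object; rank 1 never sees it except as a message.
        payload = {"numbers": list(range(5))}
        comm.send(payload, dest=1, tag=0)
    elif rank == 1:
        # The object arrives as a pickled copy, not a shared reference.
        payload = comm.recv(source=0, tag=0)
        print("rank 1 received:", payload)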
>> Solving reference counts in this situation is a separate issue that
>> will likely need to be resolved, regardless of which machinery we use
>> to isolate task execution.
> As long as we have a GIL, and we need the GIL to update a reference
> count, it does not hurt as much as it otherwise would.  The GIL hides
> most of the scalability impact by serializing the flow of execution.
It does hurt in COW situations, e.g. after forking: merely reading an object updates its refcount and dirties the page it lives on, defeating copy-on-write (see the sketch below).  My expectation is that we'll at least need to take a serious look at the matter in the short term (i.e. for Python 3.6).
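A rough sketch of what I mean, assuming a POSIX fork:

    import os

    # Build a large structure in the parent.  After fork(), the child's
    # pages are initially shared copy-on-write with the parent.
    data = [str(i) for i in range(10**6)]

    pid = os.fork()
    if pid == 0:
        # Child: this is a read-only traversal at the Python level, but
        # every item's refcount is incremented and decremented along the
        # way.  Those writes dirty nearly every page holding the objects,
        # so the kernel has to copy them anyway.
        total = sum(len(s) for s in data)
        os._exit(0)
    else:
        os.waitpid(pid, 0)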
>> IPC sounds great, but how well does it interact with Python's memory
>> management/allocator?  I haven't looked closely, but I expect that
>> multiprocessing does not use IPC anywhere.
> multiprocessing does use IPC.  Otherwise the processes could not
> communicate.  One example is multiprocessing.Queue, which uses a pipe
> and a semaphore.
Right.  I don't know quite what I was thinking. :)
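For the record, a minimal sketch of that path; the pipe-plus-semaphore machinery is internal to Queue, so this only shows the user-visible side:

    from multiprocessing import Process, Queue

    def worker(q):
        # Runs in a separate process; the only channel back to the
        # parent is the Queue, i.e. a pipe guarded by a semaphore.
        q.put({"result": sum(range(100))})

    if __name__ == "__main__":
        q = Queue()
        p = Process(target=worker, args=(q,))
        p.start()
        print(q.get())  # Unpickled from the pipe in the parent.
        p.join()

-eric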