[SciPy-User] [mpi4py] MPI, threading, and the GIL
Aron Ahmadia
aron at ahmadia.net
Fri Sep 9 15:45:32 EDT 2011
Hey Matt,
> More specifically, our iterative algorithm needs to send data from
> rank N to rank N+1, but the rank N+1 processor doesn't need this data
> immediately - it has to do a few other things before it needs it. For
> each MPI process, I have three threads: one thread for computations,
> one thread for doing MPI sends, and one thread for doing MPI receives.
This is not idiomatic MPI. You can do the same thing with a single thread
(and avoid GIL issues) by posting non-blocking sends and receives
(MPI_Isend/MPI_Irecv) as soon as the data is ready, and then issuing a
wait on the receiving end when you actually need the data to proceed.
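For reference, here is a minimal single-threaded sketch of that pattern with
mpi4py (the buffer size, tag, and variable names below are illustrative, not
taken from your code):

    # Run with e.g. mpirun -n 4 python isend_irecv_sketch.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    data = np.arange(100, dtype='d') * rank   # payload produced on rank N
    recv_buf = np.empty(100, dtype='d')       # space for data from rank N-1

    send_reqs = []
    if rank + 1 < size:
        # Post the send as soon as the data exists; it progresses in the background.
        send_reqs.append(comm.Isend([data, MPI.DOUBLE], dest=rank + 1, tag=0))
    if rank > 0:
        # Post the receive early so the incoming message has somewhere to land.
        recv_req = comm.Irecv([recv_buf, MPI.DOUBLE], source=rank - 1, tag=0)

    # ... do the other work that does not depend on the neighbour's data ...

    if rank > 0:
        recv_req.Wait()                # block only when the data is actually needed
    MPI.Request.Waitall(send_reqs)     # ensure the send completed before reusing 'data'

The point is that the overlap of communication and computation comes from the
MPI library progressing the posted requests, so no extra Python threads (and
no contention on the GIL) are needed.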
Aron
On Fri, Sep 9, 2011 at 10:41 PM, Matthew Emmett <memmett at unc.edu> wrote:
> Hi everyone,
>
> I am having trouble with MPI send/recv calls, and am wondering if I
> have come up against the Python GIL. I am using mpi4py with MVAPICH2
> and the threading Python module.
>
> More specifically, our iterative algorithm needs to send data from
> rank N to rank N+1, but the rank N+1 processor doesn't need this data
> immediately - it has to do a few other things before it needs it. For
> each MPI process, I have three threads: one thread for computations,
> one thread for doing MPI sends, and one thread for doing MPI receives.
>
> I have set this up in a similar manner to the sendrecv.py example here:
>
>
> http://code.google.com/p/mpi4py/source/browse/trunk/demo/threads/sendrecv.py
>
> The behavior that I have come across is the following: the time taken
> for each iteration of the computational part varies quite a bit. It
> should remain roughly constant, which I have confirmed in other tests.
> After all, the amount of work done in the computational part remains
> the same during each iteration. It seems like the threads are not
> running as smoothly as I expect, and I wonder if this is due to the
> GIL and my use of threads.
>
> Has anyone else dealt with a similar problem?
>
> I have a slightly outdated F90 implementation of the algorithm that
> isn't too far behind its Python cousin. I will try to bring it up to
> date and try the new communication pattern, but it would be nice to
> stay in Python land if possible.
>
> Any suggestions would be appreciated. Thanks,
> Matthew
>