[IPython-dev] minimal mpi4py notebook example

Fernando Perez fperez.net at gmail.com
Mon Sep 19 22:22:45 EDT 2011


Hi Chris,

responding on-list so this feedback benefits other users as well...

On Mon, Sep 19, 2011 at 1:41 PM, Chris Kees <cekees at gmail.com> wrote:
>
> I think I can do it either way, but having the notebook be the rank-0 node is probably the most natural for me, since my experience is primarily with SIMD programming. Image files are composited and written to disk by the rank-0 process, so I think they would automatically show up in the notebook (or be easy to fit into the display protocol). On the other hand, I can encapsulate the entire parallel program in a function call, so my guess is I can do it either way. I'm not sure it's useful, but below I've included the entire program that needs to run either a) in a Python interpreter that has already called MPI init or b) in one that was started with mpiexec.

Well, after writing my email I realized that it's not so easy right
now to have your notebook kernel be one of the MPI ones.  There's no
fundamental problem with the architecture, just a bit of missing
code: right now, kernels are started by the notebook server process,
which opens a web connection on one side and maps it to that kernel
on the other.  We haven't yet added the ability to connect a web
notebook to an *existing kernel* based on its port information (the
way you can wire a Qt console to an existing kernel).  So the MPI
processes can start a kernel all they want, but right now there's no
way to control that kernel from a notebook (though you could control
it from a Qt console, if you print/save the port data somewhere).
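
For reference, attaching a Qt console to a kernel that's already
running looks roughly like this from a terminal (a sketch only; the
exact flags vary across IPython versions):

    # Attach a Qt console to the most recently started kernel, using
    # the port/connection info that kernel has written out.
    ipython qtconsole --existing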

If, with more experience, we find this to be a genuinely necessary
use case, we can certainly implement it: the whole architecture is
designed so that it could work; it's just a little bit of missing
glue in places.

But in the meantime, I've written up (attached) two notebooks
illustrating simple MPI use.  The first is really the kind of code
you'd normally run in a terminal, but I put it in a notebook to show
that it can be a way for your users to start things if they are a
bit terminal-phobic.  They can use that notebook as a poor man's
terminal, and the 'busy' indicator will stay on for as long as the
cluster is active.  They can stop it with the 'kernel interrupt'
button, which effectively sends Ctrl-C.  The same goal can be
achieved with a screen session into the server (more robust than a
plain ssh login), but this way it sits nicely among your open
notebooks, right next to the real work.
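
For reference, the 'launcher' notebook essentially boils down to a
single cell along these lines (a minimal sketch, not the attachment
itself; the 'mpi' profile name and the engine count are assumptions,
and the exact ipcluster flags depend on your IPython version):

    # Start a 4-engine cluster whose engines are launched via MPI,
    # using a pre-configured 'mpi' profile.  This cell stays busy for
    # as long as the cluster runs; the 'kernel interrupt' button
    # (Ctrl-C) shuts it down.
    !ipcluster start --profile=mpi -n 4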

The 'real work' is in the second notebook, which illustrates some
basic communication with the engines and uses %autopx, with the
convenience of whole-cell editing, to send them a little bit of MPI
code (taken from the mpi4py docs).
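
In case the attachments get scrubbed, the second notebook is roughly
the following sequence of cells (a sketch rather than the attachment
itself: it assumes a running MPI-enabled cluster under a profile
named 'mpi', and the psum reduction is just a common mpi4py-style
example, not necessarily the exact code attached):

    # Cell 1: connect to the running cluster and activate the
    # parallel magics for a view on all engines.
    from IPython.parallel import Client
    rc = Client(profile='mpi')
    view = rc[:]
    view.activate()   # makes %px / %autopx target this view

    # Cell 2: toggle autopx on, so each following cell runs on all
    # engines rather than locally.
    %autopx

    # Cell 3 (executes on every engine): a small mpi4py reduction.
    from mpi4py import MPI
    import numpy as np

    def psum(a):
        # Sum the local chunk, then Allreduce the partial sums
        # across all MPI ranks.
        local = np.array(np.sum(a), 'd')
        total = np.array(0.0, 'd')
        MPI.COMM_WORLD.Allreduce([local, MPI.DOUBLE],
                                 [total, MPI.DOUBLE],
                                 op=MPI.SUM)
        return total

    print(psum(np.random.rand(100)))

    # Cell 4: toggle autopx back off.
    %autopx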

Let me know if this helps, and if so, we can put it in the docs.

Cheers,

f
-------------- next part --------------
Attachments (scrubbed by the list software; available at the archive URLs below):
  parallel_ipcluster.ipynb (3344 bytes):
    <http://mail.python.org/pipermail/ipython-dev/attachments/20110919/ebe8a1cf/attachment.obj>
  parallel_mpi.ipynb (6513 bytes):
    <http://mail.python.org/pipermail/ipython-dev/attachments/20110919/ebe8a1cf/attachment-0001.obj>

