[Chicago] MPI in Python?
steder at gmail.com
Wed Feb 27 18:27:31 CET 2008
<chicago at python.org>
> Is anyone using MPI within Python to harness the compute power of multiple
> CPU machines? I see two MPI packages available for Python:
PyMPI is the older of the two and is no longer actively maintained. It is
implemented as a modified interpreter rather than an ordinary extension
module, and it does not support the parallel I/O operations that MPI4py
does. On the other hand, PyMPI has been used in "production" code at
Lawrence Livermore National Labs, so it is a heavily used and abused
implementation with reasonably good performance characteristics. Still,
given how out of date it now is, it is probably not the best choice for
someone picking up MPI and Python today. MPI4py is a more up-to-date
implementation, heavily influenced by the MPI-2 standard and the C++
bindings for that standard, and it is just an ordinary module that you import.
In PyMPI, initialization of the MPI environment happens when the interpreter
starts, which can be a problem for programs that involve multiple MPI
processes collaborating on a single problem. MPI4py, I assume, initializes
the MPI environment when the module is imported, as I've seen that as a
common idiom.
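From memory, a minimal MPI4py program looks something like the sketch below; the serial fallback is my own addition so the snippet still runs on a machine without mpi4py installed, and is not part of MPI4py itself:

```python
try:
    from mpi4py import MPI  # MPI_Init effectively runs at import time
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
except ImportError:
    # Serial fallback so the sketch works without an MPI installation
    rank, size = 0, 1

print("Hello World from Process %d of %d" % (rank, size))
```

Under mpirun each process would report its own rank; run serially it just prints process 0 of 1.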
As I'm not a terribly big fan of implementing a whole bunch of logic in C, I
actually have YAMPII at http://code.google.com/p/maroonmpi/ that attempts to
minimize the amount of C code necessary to write a useful MPI program in
Python. It consists of a simple C module, like MPI4py, but with explicit
initialization and finalization calls. It is still a very simple
implementation of just the core MPI-1 functionality (along with the MPE
library for logging and visualization support). Personally, I would go with
MPI4py; but if you are interested in extending or modifying the core
functionality, I would recommend my MaroonMPI implementation, as it is
simpler, written entirely in ANSI C and Python, and quite straightforward.
Additionally, I've tested MaroonMPI on various Beowulf clusters at the
University of Chicago, Argonne National Lab, and the University of Texas.
It has been at least 6 months since I last used either PyMPI or MPI4py, so I
apologize if I've misrepresented either of those projects here. I think
both are solid implementations, but for my purposes they were inadequate.
Specifically, I was interfacing 4 parallel climate models written in 3
programming languages. Kind of an arcane scenario :-D
> Any idea which of these is "better" for some vague definition of the
> word "better"?
> Is MPI too low-level? Is there something higher-level on top of it? I've
> been playing around with Richard Oudkerk's processing package (see its entry
> on PyPI: http://pypi.python.org/pypi/processing). That provides an API
> very much like Python's threading package, which makes it pretty easy to
> use. It's not based on MPI though. A marriage of the two might be nice.
So in terms of "better", I'd lean heavily towards MPI4py, or a simpler module
like my MaroonMPI.
The beauty of something like threading is that we can dynamically create and
delete threads. MPI-1 uses a static process model, so you have to fix the
number of processes when you launch the program.
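To make the contrast concrete, here is a small standard-library threading sketch where the worker count is just a runtime value (the `worker` function and count of 4 are arbitrary choices of mine), something plain MPI-1 cannot do:

```python
import threading

def worker(i, results):
    results[i] = i * i  # trivial stand-in for real work

results = {}
# The number of workers is decided at runtime -- no fixed process count
threads = [threading.Thread(target=worker, args=(i, results)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```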
A simple MaroonMPI program (hello-mpi.py):
import mmpi

rank, size = mmpi.init()
print "Hello World from Process %s of %s" % (rank, size)
mmpi.finalize()
Is run by typing:
$ mpirun -n 2 /path/to/python /path/to/hello-mpi.py
And the result should be something like:
Hello World from Process 0 of 2
Hello World from Process 1 of 2
Anyway, the static process space is no longer an issue under the MPI-2 spec,
but dynamic processes would still have to be managed, and I don't believe any
of the Python+MPI implementations bother to do that. Maybe the IPython1
implementation will handle this kind of thing.
MPI is definitely a low-level parallel library, but it is also a standard in
many scientific communities. Depending on the work you need to do, though, I
imagine the parallelization provided by the processing package on PyPI is
perfectly reasonable. With more knowledge of the particular problem you're
tackling, we could look at creating a benchmark suite and comparing
processing to an MPI implementation.
> Any thoughts?
I realize that's a long, rambling reply to a simple question, but I hope it's
useful.