[Chicago] MPI in Python?
mtobis at gmail.com
Wed Feb 27 21:38:06 CET 2008
In high-performance scientific computing MPI is the standard solution
for distributed memory systems.
I had understood OpenMP to be a solution for shared memory parallelism
of certain sorts. Look at it carefully to see if your programming
model matches it; my guess is that for most on this list it won't even apply.
If there is a *distributed memory* version of OpenMP, it would seem
to be more of a toy than a real solution. Maybe it's useful for
porting existing OpenMP work to distributed memory?
That's consistent with what Massimo said; he said nothing about
not using MPI, so your summary seems wrong.
I think you want to build a threadlike environment on top of MPI on
top of ethernet or high performance switches. That seems plausible and
useful. You could even build duck-typed threadlike objects.
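A minimal sketch of what such a duck-typed threadlike object might look like. The class name `ChannelThread` and the queue-based transport are illustrative stand-ins: on a real cluster the send/recv endpoints would be backed by MPI, not by in-process queues.

```python
import queue
import threading

class ChannelThread:
    """Hypothetical duck-typed thread-like object: exposes start()/join()
    like threading.Thread, but the target communicates only through
    explicit send/recv endpoints -- the role MPI would play on a
    distributed-memory machine."""

    def __init__(self, target, inbox, outbox):
        self._inbox = inbox      # stand-in for an MPI recv endpoint
        self._outbox = outbox    # stand-in for an MPI send endpoint
        self._thread = threading.Thread(
            target=target, args=(self.recv, self.send))

    def send(self, msg):
        self._outbox.put(msg)

    def recv(self):
        return self._inbox.get()

    def start(self):
        self._thread.start()

    def join(self):
        self._thread.join()

def doubler(recv, send):
    # Worker never touches shared state: it only receives and sends.
    send(recv() * 2)

main_to_worker = queue.Queue()
worker_to_main = queue.Queue()
worker = ChannelThread(doubler, inbox=main_to_worker, outbox=worker_to_main)
worker.start()
main_to_worker.put(21)
result = worker_to_main.get()   # 42
worker.join()
```

Because the object quacks like a thread, code written against it shouldn't care whether the transport underneath is a local queue, a socket, or an MPI rank on another node.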
As Mike S said, you can also build MPI on top of threads. That's a
different prospect. (If you succeed in your mission you could build
threads on top of MPI and implement MPI with threads as a stunt or a
proof of concept.)
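For the inverse direction, here is a toy sketch of an MPI-style communicator implemented with threads and queues. `ToyComm` and its method names are made up for illustration; a real MPI adds tags, collectives, and transport across machines.

```python
import queue
import threading

class ToyComm:
    """Toy MPI-like communicator backed by per-rank mailboxes.
    send()/recv() move Python objects between ranks running as
    threads in one process -- purely illustrative."""

    def __init__(self, size):
        self.size = size
        self._mailboxes = [queue.Queue() for _ in range(size)]

    def send(self, obj, dest):
        # Deliver obj to the destination rank's mailbox.
        self._mailboxes[dest].put(obj)

    def recv(self, rank):
        # Block until a message arrives for this rank.
        # (Real MPI can also match on source and tag.)
        return self._mailboxes[rank].get()

comm = ToyComm(size=2)

def rank1():
    msg = comm.recv(1)            # wait for rank 0's message
    comm.send("pong:" + msg, dest=0)

worker = threading.Thread(target=rank1)
worker.start()
comm.send("ping", dest=1)         # "rank 0" runs on the main thread
reply = comm.recv(0)              # "pong:ping"
worker.join()
```

The point is that the message-passing interface is agnostic about what carries the messages, which is why MPI can sit on top of threads, shared memory, or sockets.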
On Wed, Feb 27, 2008 at 2:15 PM, <skip at pobox.com> wrote:
> Massimo> If the CPUs don't share the same RAM, threading is not an option
> Massimo> unless you use OpenMP, which emulates shared memory but is
> Massimo> very slow compared with MPI. Nobody uses it anymore.
> Skip> So MPI means "Message Passing Interface" where the messages must
> Skip> be passed via shared memory? *sigh* If so, I'll look elsewhere
> Skip> for solutions.
> Mike> MPI means "Message Passing Interface" where you shouldn't have to
> Mike> worry about the protocol used. MPI will transparently use shared
> Mike> memory, pthreads, some other ipc mechanism, or sockets based on
> Mike> the configuration when you built the MPI library and/or what is
> Mike> the best supported mechanism on your operating system.
> So I obviously misunderstood what Massimo wrote. What did he mean? I
> interpreted his statement as meaning you can't use MPI without a shared
> memory architecture unless you are using OpenMP which nobody uses anymore.