Parallel Python

Nick Maclaren nmm1 at
Wed Jan 10 18:40:49 CET 2007

In article <1168449342.414838.181050 at>,
"sturlamolden" <sturlamolden at> writes:
|> MPI is becoming the de facto standard for high-performance parallel
|> computing, both on shared memory systems (SMPs) and clusters.

It has been for some time, and is still gaining ground.

|> Spawning
|> threads or processes is not the recommended way to do numerical
|> parallel computing.

Er, MPI works by getting SOMETHING to spawn processes, which then
communicate with each other.
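To make that concrete, here is a minimal sketch of the message-passing
model using Python's standard multiprocessing module rather than MPI
itself (the function names are mine, purely illustrative): a parent
process spawns a worker, and the two communicate only by sending
messages over a pipe, never by sharing memory.

```python
# Message passing between two processes, MPI-style, using only
# the Python standard library.  The parent sends work down a pipe;
# the worker computes and sends the result back.
from multiprocessing import Process, Pipe

def worker(conn):
    # Receive a chunk of work, compute on it, send the answer back.
    data = conn.recv()
    conn.send(sum(x * x for x in data))
    conn.close()

def parallel_sum_of_squares(data):
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send(data)       # message out to the worker
    result = parent_conn.recv()  # message back with the result
    p.join()
    return result

if __name__ == "__main__":
    print(parallel_sum_of_squares([1, 2, 3]))  # prints 14
```

The point is that each process owns its data outright; all
coordination happens through explicit sends and receives, which is
exactly the discipline MPI imposes.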

|> Threading makes programming certain tasks more convenient
|> (particularly GUI and I/O, for which the GIL does not matter anyway),
|> but is not a good paradigm for dividing CPU-bound computations between
|> multiple processors. MPI is a high-level API based on a concept of
|> "message passing", which allows the programmer to focus on solving the
|> problem, instead of on irrelevant distractions such as thread
|> management and synchronization.

Grrk.  That's not quite it.

The problem is that the current threading models (POSIX threads and
Microsoft's equivalent) were intended for running large numbers of
semi-independent, mostly idle, threads: Web servers and similar.
Everything about them, including their design (such as it is), their
interfaces and their implementations, is unsuitable for parallel HPC
applications.  One can argue whether that is insoluble, but let's not,
at least not here.

Now, Unix and Microsoft processes are little better, but because they
are more separate (and, especially, because they don't share memory)
they are MUCH easier to run effectively on shared-memory multi-CPU
systems.  You still have to play administrator tricks, but they aren't
as foul as the ones you have to play for threaded programs.  Yes, I
know that it is a bit Irish for the best way to use a shared memory
system to be not to share memory, but that's how it is.
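A short sketch of that point, under the assumption that the CPU-bound
job can be split into independent chunks (the prime-counting task and
function names here are mine, chosen only for illustration): separate
worker processes, each with its own address space, divide the work via
multiprocessing.Pool, sidestepping both the GIL and the shared-memory
hazards of threads, at the cost of copying data between processes.

```python
# Dividing a CPU-bound computation among separate processes on a
# shared-memory machine, using the standard multiprocessing.Pool.
from multiprocessing import Pool

def count_primes(bounds):
    # Count primes in the half-open range [lo, hi); each worker
    # process runs this on its own chunk, sharing no state.
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

def count_primes_parallel(limit, nproc=4):
    # Split [2, limit) into nproc contiguous chunks, one per process.
    step = (limit - 2) // nproc + 1
    chunks = [(lo, min(lo + step, limit)) for lo in range(2, limit, step)]
    with Pool(nproc) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    print(count_primes_parallel(100))  # prints 25 (primes below 100)
```

Because the workers share nothing, there is no locking to get wrong;
the only coordination is the implicit message passing that pool.map
does when it ships chunks out and results back.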

Nick Maclaren.

More information about the Python-list mailing list