sturlamolden at yahoo.no
Wed Jan 10 18:15:42 CET 2007
> That's true. IPC through sockets or (somewhat faster) shared memory - cPickle at least - is usually the maximum of such approaches.
> See http://groups.google.de/group/comp.lang.python/browse_frm/thread/f822ec289f30b26a
> For tasks really requiring threading one can consider IronPython.
> The most advanced technique I've seen for CPython is posh: http://poshmodule.sourceforge.net/
In SciPy there is an MPI-binding project, mpi4py.
MPI is becoming the de facto standard for high-performance parallel
computing, both on shared-memory systems (SMPs) and clusters. Spawning
threads or processes by hand is not the recommended way to do numerical
parallel computing. Threading makes programming certain tasks more
convenient (particularly GUI and I/O, for which the GIL does not matter
anyway), but it is not a good paradigm for dividing CPU-bound
computations among multiple processors. MPI is a high-level API based on
the concept of "message passing", which allows the programmer to focus
on solving the problem instead of on irrelevant distractions such as
thread management.
Although MPI has standard APIs for C and Fortran, it may be used with
any programming language. For Python, an additional advantage of using
MPI is that the GIL has no practical consequence for performance. The
GIL can lock a process but cannot prevent MPI from using multiple
processors, because MPI always uses multiple processes. For IPC, MPI
will e.g. use shared-memory segments on SMPs and TCP/IP on clusters,
but all these details are hidden.
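The message-passing model itself is easy to demonstrate even without an MPI installation. Here is a minimal sketch using only the standard library's multiprocessing module (a stand-in illustration, not mpi4py's API): each worker is a separate OS process with its own interpreter and its own GIL, and data moves between processes only through explicit send/receive calls, just as in MPI.

```python
# Message passing between processes with the stdlib, as a stand-in
# for the MPI model: separate processes (each with its own GIL)
# exchanging pickled messages over an explicit channel.
from multiprocessing import Process, Pipe

def worker(conn):
    # Receive a chunk of work, compute on it, send the result back.
    data = conn.recv()                    # blocking receive, like MPI_Recv
    conn.send(sum(x * x for x in data))   # like MPI_Send
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    parent.send(list(range(1000)))  # ship the data to the worker process
    result = parent.recv()          # collect the result
    p.join()
    print(result)                   # prints 332833500
```

With real MPI the send/receive pair would look much the same, but the runtime would transparently pick shared memory or TCP/IP as the transport, and the same code would scale from one SMP box to a cluster.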
It seems like 'ppsmp' of parallelpython.com is just a reinvention of a
small portion of MPI.