Python/SWIG: Namespace collisions in multiple modules.

Konrad Hinsen hinsen at cnrs-orleans.fr
Tue May 23 05:25:54 EDT 2000


Jakob Schiotz <schiotz1 at yahoo.com> writes:

> 2) Using Scientific.MPI as it is.  I am afraid I am going to miss some
> functionality, such as nonblocking send/receive, and reduce operations.

Those will certainly be added in the course of time; the module
is at a very early stage of development.
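
For reference, here is a plain-MPI sketch in C (standard MPI calls,
nothing Scientific.MPI-specific, and not part of ScientificPython) of
the nonblocking send/receive and reduce primitives that such wrappers
would eventually expose at the Python level:

/* Standard MPI nonblocking send/receive plus a reduction; shown only
   to illustrate the primitives a Python-level wrapper would cover. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    double local, total;
    MPI_Request request;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    local = rank + 1.0;
    if (rank == 0 && size > 1) {
        /* Nonblocking send from rank 0 to rank 1; MPI_Wait completes it. */
        MPI_Isend(&local, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);
    }
    else if (rank == 1) {
        double received;
        MPI_Recv(&received, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
    }

    /* Reduce: sum 'local' over all processes onto rank 0. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum over %d processes = %g\n", size, total);

    MPI_Finalize();
    return 0;
}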

> 3) Extend Scientific.MPI with those functions.  In that case I'll of
> course send whatever I do to you.

Sounds ideal from my point of view ;-)

> The functions in the C API are called PyMPI_XXXX, some of them are just
> wrappers around the usual MPI_XXXX functions (but with some reordering
> of the argument list).  If I make direct calls to MPI from a compiled
> library, do I have to go through these calls, or can I call MPI_XXXX
> directly?

As far as ScientificPython is concerned, you can mix its C API
functions with direct MPI calls without problems; there is no state
information stored in the communicator objects other than run-time
constants (size and rank).
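
To make that concrete, here is a minimal sketch (mine, not part of
ScientificPython; the module and function names are made up) of a
compiled extension that issues a direct MPI call on MPI_COMM_WORLD,
assuming MPI has already been initialized by the interpreter before the
call is made:

/* Illustrative extension module; operating on MPI_COMM_WORLD directly
   is safe because the Scientific.MPI communicator objects store no
   state beyond size and rank. */
#include <Python.h>
#include <mpi.h>

/* Sum a Python float over all processes with a direct MPI_Allreduce. */
static PyObject *
allreduce_sum(PyObject *self, PyObject *args)
{
    double value, total;
    if (!PyArg_ParseTuple(args, "d", &value))
        return NULL;
    MPI_Allreduce(&value, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    return Py_BuildValue("d", total);
}

static PyMethodDef mpiextras_methods[] = {
    {"allreduce_sum", allreduce_sum, METH_VARARGS,
     "Sum a float over all MPI processes."},
    {NULL, NULL, 0, NULL}
};

void
initmpiextras(void)
{
    Py_InitModule("mpiextras", mpiextras_methods);
}

Called from Python alongside Scientific.MPI, e.g.
total = mpiextras.allreduce_sum(partial), this behaves correctly as
long as the module is linked against the same MPI library as the
interpreter, which brings up the linking issue below.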

However, if you do make direct MPI calls, it could be a challenge to
find a working linking arrangement, especially if your goal is a
portable program meant for public distribution. I included the
low-level C API functions in ScientificPython precisely to provide a
portable interface for use by dynamic libraries. On the other hand, if
it works on your machine and you don't care much about portability, go
ahead.

> macros).  I understand that the C API functionality is necessary if the
> MPI module is dynamically loaded, but if I build a static executable
> with Scientific.MPI as you recommend, can I then use the "ordinary" MPI
> calls?

The static executable does not necessarily include all of MPI; only
the MPI functions called from ScientificPython are guaranteed to be
included. And those symbols may or may not be visible to shared
libraries, depending on your platform.

A safe setup would be to link all modules that call MPI functions
statically into the Python executable. But then you lose all those nice
features of shared libraries.

> You have chosen to make all MPI functions methods of the communicator
> in Python.  Is there any compelling reason for doing this, apart from
> "being object oriented"?  Not that I have anything against it, but I
> was wondering.

The main reason was "doing it the Python way". Implementing everything
as methods on communicator objects also opens up the possibility of
implementing an interface-compatible message-passing system based on
some other library, but at the moment there doesn't seem to be much of
an alternative to MPI.
-- 
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hinsen at cnrs-orleans.fr
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------


