[PYTHON MATRIX-SIG] Final matrix object renaming and packaging

Hinsen Konrad hinsenk@ere.umontreal.ca
Tue, 16 Jan 1996 12:10:28 -0500


   Hi all.  I just got back in town from an extended Christmas vacation
   (I also got married, so I had a good excuse for being gone so long).

Congratulations!

   1) Konrad's numeric patches to the python core (incorporated into the
   working version of python by Guido) will be required and included with
   the distribution.  These will be the only patches to the python core
   required.

There has been a slight modification in the meantime (there will be
only 'j' for imaginary constants, not 'i') and two corrections that
perhaps not everybody has received. I'll prepare a new set of patch
files for the beta release.
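
Just so we all write the same thing in examples: with the modified
patches, imaginary constants use only the 'j' suffix. A quick
illustration:

    # complex constants with the 'j' suffix (the 'i' form is gone)
    z = 3 + 4j
    print(z * 1j)      # (-4+3j)
    print(abs(z))      # 5.0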

   This PyArray C type will not implement automatic type-coercion (unlike the
   current implementation).  The reason for this is that I have decided
   type-coercion is a major pain for arrays of floats (as opposed to
   doubles) and I happen to use arrays of floats for most of my work.  If
   somebody can give me a good suggestion for how to keep automatic
   type-coercion and have Matrix_f(1,2,3)*1.2 == Matrix_f(1.2,2.4,3.6)
   then I might consider changing this decision.  See later note on
   Array.py for an alternative.

One way to solve your "float" problem would be to write
   Matrix_f(1,2,3)*Matrix_f(1.2),
i.e. cast 1.2 explicitly to a float matrix. I don't see how you can
avoid that cast anyway; without type coercion, Matrix_f(1,2,3)*1.2
would be an error.
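
To make that concrete, here is a toy sketch of the behaviour I have
in mind; Matrix_f here is of course not the real C type, just a
stand-in Python class for illustration:

    # Toy stand-in for the proposed float array type; it shows the
    # behaviour under discussion: no automatic coercion of scalars.
    class Matrix_f:
        def __init__(self, *values):
            self.values = [float(v) for v in values]

        def __mul__(self, other):
            if not isinstance(other, Matrix_f):
                # without coercion, a bare Python number is an error
                raise TypeError("expected a Matrix_f, got %r" % (other,))
            if len(other.values) == 1:
                # a length-1 operand acts as the explicit scalar cast
                s = other.values[0]
                return Matrix_f(*[v * s for v in self.values])
            return Matrix_f(*[a * b for a, b in
                              zip(self.values, other.values)])

        def __repr__(self):
            return "Matrix_f(%s)" % ", ".join("%g" % v for v in self.values)

    print(Matrix_f(1, 2, 3) * Matrix_f(1.2))   # Matrix_f(1.2, 2.4, 3.6)
    # Matrix_f(1, 2, 3) * 1.2                  # would raise TypeError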

I'd prefer to have type coercion everywhere for consistency.  If the
"high-level" classes Array and Matrix are really fast enough (about
which I have doubts, see below), then it doesn't matter that much, but
in general I'd give top priority to consistency and put the burden of
more complicated coding on those who want top speed (which of course
must be possible).

   3) Two dynamically linkable modules called "umathmodule.c" and "ieee_umathmodule.c"

I don't think that using "ieee" in the name is a good idea. After all,
there is no guarantee that this will really use IEEE arithmetic (a
Vax, for example, doesn't have IEEE arithmetic hardware). Why not
just "fast_umath"?

   4) Two python objects, "Array.py" and "Matrix.py"

   Array is essentially a python binding around the underlying C type,
   and this will also provide for automatic type-coercion and will
   generally assume that it is only working with arrays of type long,
   double, and complex double (the three types of arrays that are
   equivalent to python objects).  In my initial tests of this object on
   LLNL's simple benchmark, I found it to be only 5% slower than using
   the C object directly.

Nevertheless, the Python wrapper could add significant overhead
for small arrays, which will also be used heavily. For example,
I am planning to change my vector class (3d vectors for geometry
etc.) to use an array of length 3 as its internal representation.
Do you have any data on the speed penalty for such small arrays?
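
To give an idea of what I mean, a rough sketch of such a vector
class; the real thing would keep an Array of length 3 instead of the
plain Python list I use as a stand-in here:

    import math

    class Vector3:
        def __init__(self, x, y, z):
            # would be Array((x, y, z)) once Array exists
            self.array = [float(x), float(y), float(z)]

        def __add__(self, other):
            a, b = self.array, other.array
            return Vector3(a[0] + b[0], a[1] + b[1], a[2] + b[2])

        def dot(self, other):
            a, b = self.array, other.array
            return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

        def length(self):
            return math.sqrt(self.dot(self))

    # geometry code creates many small, short-lived objects like this,
    # which is why the per-object wrapper overhead matters
    v = Vector3(1, 0, 0) + Vector3(0, 2, 0)
    print(v.length())   # 2.2360679...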

   Matrix will inherit almost everything from Array; however, it will be
   limited to 2 or fewer dimensions, and m1*m2 where m1 and m2 are
   matrices will perform matrix-style multiplication.  If the linear
   algebra people would like, other changes can be made (e.g. ~m1 ==
   m1.transpose(), ...).  Based on the experiments with Array, the
   performance penalty for this approach should be minimal.

I suppose some discussion will be necessary about the details of
this class, which didn't exist until now. Given that many people
seem to prefer a distinction between row and column vectors, it
would be a good idea to include separate constructors for them.
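
To make the point concrete, a toy sketch; nested lists stand in for
the real Array storage, and names like row_vector/column_vector are
only suggestions:

    class Matrix:
        def __init__(self, rows):
            self.rows = [[float(x) for x in r] for r in rows]

        def __mul__(self, other):
            # m1*m2 does matrix-style multiplication, not elementwise
            cols = list(zip(*other.rows))
            return Matrix([[sum(a * b for a, b in zip(row, col))
                            for col in cols] for row in self.rows])

        def __invert__(self):
            # ~m as a possible spelling of m.transpose()
            return Matrix(list(zip(*self.rows)))

        def __repr__(self):
            return "Matrix(%r)" % (self.rows,)

    def row_vector(*values):       # shape (1, n)
        return Matrix([values])

    def column_vector(*values):    # shape (n, 1)
        return Matrix([[v] for v in values])

    print(row_vector(1, 2) * column_vector(3, 4))   # Matrix([[11.0]])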

   5) A standard library "Numeric.py" which will be the standard way of
   importing multiarray, Array, Matrix, umath, etc.  It will include the
   reciprocal and inverse trig functions ("sec", "csc", "arcsec", ...) as
   well as most of the standard functions currently in Matrix.py

I am not sure it is a good idea to have such an all-encompassing
module. It would contain a rather arbitrary selection of things,
i.e. those which exist at the time it is introduced. I expect
that other numerical modules will appear later. So it would be
better to group things according to functions or applications.
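
As for the extra trig functions, they reduce to the basic ones
anyway; a minimal sketch, with the plain math module standing in
for umath:

    import math

    def sec(x):
        return 1.0 / math.cos(x)

    def csc(x):
        return 1.0 / math.sin(x)

    def arcsec(x):
        return math.acos(1.0 / x)

    def arccsc(x):
        return math.asin(1.0 / x)

    print(sec(0.0))       # 1.0
    print(arcsec(1.0))    # 0.0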

   9?) A "numericmodule.c" which contains reasonably efficient fft,
   matrix inversion, convolution, random numbers, eigenvalues and
   filtering functions stolen from existing C code so that the package
   can be viewed as a "complete" numerical computing system?

Again this should be several modules: linear algebra, random
numbers, fft/convolution. We won't achieve anything "complete"
anyway, so let's not pretend we do.


Otherwise, your proposal sounds fine to me.

Konrad.

-------------------------------------------------------------------------------
Konrad Hinsen                     | E-Mail: hinsenk@ere.umontreal.ca
Departement de chimie             | Tel.: +1-514-343-6111 ext. 3953
Universite de Montreal            | Fax:  +1-514-343-7586
C.P. 6128, succ. Centre-Ville     | Deutsch/Esperanto/English/Nederlands/
Montreal (QC) H3C 3J7             | Francais (phase experimentale)
-------------------------------------------------------------------------------

=================
MATRIX-SIG  - SIG on Matrix Math for Python

send messages to: matrix-sig@python.org
administrivia to: matrix-sig-request@python.org
=================