[PYTHON MATRIX-SIG] Python, matrices, and everything

Hinsen Konrad hinsenk@ere.umontreal.ca
Tue, 12 Sep 1995 14:08:18 -0400


[Yet another introduction - skip the next paragraph if you want to
 read something about matrices...]

My name is Konrad Hinsen (*not* Hinsen Konrad, as given in my
address), and I am a computational physicist, currently working
as a postdoc at the University of Montreal. My main field of
work is computer simulations of various kinds, and I use Python
for everything that is not time-critical. My opinion about
Python is a mixture of admiration for its elegant and efficient
structure and frustration about its lack of numerical features,
which is why I joined the Matrix-SIG.

[Here comes the real stuff.]

Before jumping into technical details, it might be a good idea to
define what we are aiming at. As has already become clear from the
discussion, matrices are used in two different (but not entirely
distinct) ways:

1) in linear algebra applications
2) as containers of values that are structured as tables

In linear algebra, one needs mainly one- and two-dimensional matrices
of real or complex numbers. Important operations are addition and
multiplication, factorization, inversion, eigenvalues etc. There are
efficient Fortran and C libraries for these operations, so what is
needed is a suitable representation in Python that allows interfacing
to these libraries and some convenient way of accessing them, i.e.
operators (+ - *, maybe / for scalar division) and tons of operations
that are best packaged into a matrix class.
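A minimal sketch, in today's Python, of the kind of matrix class meant
here. Everything below is illustrative, not a concrete proposal: the
name Matrix is invented, plain-Python arithmetic stands in for calls
into compiled Fortran/C libraries, and / is restricted to scalar
division as suggested above.

```python
class Matrix:
    """Hypothetical matrix class; a real version would wrap
    compiled linear-algebra routines instead of pure Python."""

    def __init__(self, rows):
        self.rows = [list(r) for r in rows]

    def __add__(self, other):
        return Matrix([[x + y for x, y in zip(a, b)]
                       for a, b in zip(self.rows, other.rows)])

    def __sub__(self, other):
        return Matrix([[x - y for x, y in zip(a, b)]
                       for a, b in zip(self.rows, other.rows)])

    def __mul__(self, other):
        # In this sketch, * is the linear-algebra matrix product.
        cols = list(zip(*other.rows))
        return Matrix([[sum(x * y for x, y in zip(r, c)) for c in cols]
                       for r in self.rows])

    def __truediv__(self, scalar):
        # Division by a scalar only, as suggested above.
        return Matrix([[x / scalar for x in r] for r in self.rows])
```

Factorization, inversion, eigenvalues and the rest would then be added
as methods (or module functions) backed by the existing libraries.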

The second application is just as important (in my view) and much more
complex. As APLers know, there are lots of useful operations on tables
of values, which can be of any dimension and contain any data type,
although the numerical ones are the most important (Python already
provides good string handling). It makes sense to have all
mathematical functions and operators acting by default elementwise,
including of course multiplication, but also user-defined functions.
Basically, arrays would be extensions of numerical scalars, the
latter becoming arrays of rank zero. By far the most elegant and
powerful array concept I know of is the one implemented in J, a
relatively new language that can be regarded as an improved version
of APL. I certainly don't want to take over J's other features, but
its concept of array and operator ranks is extremely useful. I'll
give a short description below.
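The elementwise semantics described above can be sketched in plain
Python over nested lists. The names rank and elementwise are invented
for illustration; scalars are treated as rank-0 arrays, so the
scalar-array combinations fall out of the same rule.

```python
def rank(a):
    """Number of dimensions of a nested-list array;
    a scalar counts as a rank-0 array."""
    r = 0
    while isinstance(a, list):
        r += 1
        a = a[0]
    return r

def elementwise(f, a, b):
    """Apply the binary function f element by element.
    A scalar (rank-0) argument is combined with every element
    of the other argument, so e.g. scalar * array works."""
    if not isinstance(a, list) and not isinstance(b, list):
        return f(a, b)
    if not isinstance(a, list):
        return [elementwise(f, a, y) for y in b]
    if not isinstance(b, list):
        return [elementwise(f, x, b) for x in a]
    return [elementwise(f, x, y) for x, y in zip(a, b)]
```

The same mechanism applies unchanged to user-defined functions, which
is exactly the point: elementwise behaviour is a property of how a
function is applied, not of the function itself.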

But first let's see how the two applications can be reconciled in
a single implementation. Many of the linear algebra operations
make sense only for two-dimensional matrices, but the J approach
of operator rank takes care of this nicely. And even without
that approach, this would only mean some more error checking for
such operations. The only point of conflict is the implementation
of multiplication (there is no difference for addition and
subtraction). Linear-algebra style matrix multiplication is
regarded as an inner product with addition and multiplication
operations from an APL/J point of view, so it is available,
but arguably not in the most convenient form. On the other hand,
once the * symbol means matrix multiplication, the uniform
structure of elementwise operations is ruined.

I therefore propose to introduce some other symbol for matrix
multiplication. One could choose some combination like ".*" or "*.",
which looks similar enough to plain multiplication to make its
meaning evident.
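To make the APL/J reading of matrix multiplication concrete: it is an
inner product, i.e. a generic reduction f over elementwise
combinations g. Here is a hypothetical Python sketch (inner is an
invented name); with f = addition and g = multiplication it reproduces
the ordinary linear-algebra product.

```python
from functools import reduce

def inner(f, g, a, b):
    """APL/J-style inner product of two matrices (lists of rows):
    combine each row of a with each column of b elementwise using g,
    then reduce the resulting values with f."""
    cols_b = list(zip(*b))  # the columns of b
    return [[reduce(f, [g(x, y) for x, y in zip(row, col)])
             for col in cols_b]
            for row in a]
```

Other choices of f and g give the other classical inner products, for
example minimum/plus for shortest-path computations.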


And now I'll briefly describe J's rank system. Every object's rank is
identical to the number of its dimensions - a scalar has rank 0, a
vector rank 1, and so on. When used in operations however, a rank N
array is viewed as a rank A array of rank B cells, where A+B=N. A rank
3 array, for example, can be used as a three-dimensional array of
scalars, as a two-dimensional matrix of vectors, as a vector of
two-dimensional matrices, or as a "scalar" containing a
three-dimensional array. How it is used depends on the rank of the
operation acting on it. Most scalar operations have infinite rank,
which means that they act at the highest rank
possible. Effectively that means elementwise operation in the
classical sense, except that there are also rules that cover the
combination of objects of different rank, such as multiplication of an
array by a scalar. But there are also other operations. Matrix
inversion, for example, by default operates on rank 2 cells.
So if you "invert" a three-dimensional array, this operation
will be interpreted as inversion of each rank-2 cell in the
array. The result is again a rank 3 array.
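The frame/cell decomposition just described can be sketched in Python.
The name apply_at_rank is invented, and transposition stands in for
matrix inversion only to keep the example self-contained; the point is
the mapping over cells, not the operation itself.

```python
def apply_at_rank(f, cell_rank, a, a_rank):
    """Apply f to every cell of rank cell_rank inside the nested-list
    array a, whose own rank is a_rank.  The outer a_rank - cell_rank
    dimensions form the frame, which is preserved in the result."""
    if a_rank == cell_rank:
        return f(a)
    return [apply_at_rank(f, cell_rank, cell, a_rank - 1) for cell in a]

def transpose(m):
    """Operate on a rank-2 cell (stand-in for matrix inversion)."""
    return [list(col) for col in zip(*m)]
```

Applying transpose at rank 2 to a rank-3 array transposes each
two-dimensional slice and yields a rank-3 array again, just as
described for inversion above.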

The beauty of the system is that the rank of each operation can be
changed at will, not only for the built-in operations, but also for
user-defined functions. If, for example, you want to add a rank 2
matrix to each rank 2 cell of a rank 3 array, you just specify rank 2
addition and get what you want. Nothing could be simpler or more
flexible. Unfortunately, I dislike many of J's other features as
a programming language, which is why I am still a Python user...
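The rank-2 addition example in the previous paragraph might look like
this as a Python sketch; add_rank2 is an invented name, and a real
rank mechanism would of course be generic over the operation rather
than hard-wired to addition.

```python
def add_rank2(a, b, a_rank):
    """Add the rank-2 matrix b to every rank-2 cell of the
    nested-list array a, whose rank a_rank is 2 or higher."""
    if a_rank == 2:
        return [[x + y for x, y in zip(row_a, row_b)]
                for row_a, row_b in zip(a, b)]
    return [add_rank2(cell, b, a_rank - 1) for cell in a]
```

For a rank-3 array, the matrix b is added once to each rank-2 cell,
which is exactly "addition at rank 2".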


So that's my point of view in this discussion: let's concentrate
on APLish (or Jish) arrays and then add the operations that are
needed in linear algebra.

-------------------------------------------------------------------------------
Konrad Hinsen                     | E-Mail: hinsenk@ere.umontreal.ca
Departement de chimie             | Tel.: +1-514-343-6111 ext. 3953
Universite de Montreal            | Fax:  +1-514-343-7586
C.P. 6128, succ. A                | Deutsch/Esperanto/English/Nederlands/
Montreal (QC) H3C 3J7             | Francais (phase experimentale)
-------------------------------------------------------------------------------

=================
MATRIX-SIG  - SIG on Matrix Math for Python

send messages to: matrix-sig@python.org
administrivia to: matrix-sig-request@python.org
=================