[PYTHON MATRIX-SIG] Vector-style operations vs. matrix operations
Chris Chase S1A
Tue, 12 Sep 1995 17:05:55 -0400
This is a long post.
Self-introduction: I am an electrical engineering PhD working at the
Applied Physics Lab of Johns Hopkins University. I do a lot of remote
sensor data analysis requiring a great deal of rapidly developed
numerical code. I would like to express some of my opinions regarding
the elementwise versus matrix operators discussion.
Elementwise operations for the proposed matrix objects fall under the
category of vector operations in many math libraries. When the
matrix object is allocated in a single contiguous block of memory, as
in C or Fortran (and as has been proposed in this forum), vector
operations view the matrix as a one-dimensional vector. These types
of libraries support at a minimum '+', '-', '*' (and maybe '/') in an
elementwise fashion. Each is invaluable in code that wishes to avoid
explicit loops.
Many popular interactive array-oriented languages support vector
operations. Below I list how a few of these languages implement
vector and matrix operations.
In Matlab, vector operations (i.e. elementwise) are
called array operations and use the operators "+", "-", ".*" "./" and
".^". "*", "/", and "^" are used for matrix multiplication, right
inverse, and matrix powers, respectively.
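As a rough sketch of the distinction Matlab draws, here is the same
contrast written in plain present-day Python with nested lists (no
particular array extension assumed; the function names are purely
illustrative):

```python
# Illustrative contrast between Matlab's ".*" (elementwise "array"
# product) and "*" (matrix product), using plain nested lists.

def elementwise_mul(a, b):
    """Matlab's A .* B: multiply corresponding entries."""
    return [[x * y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def matrix_mul(a, b):
    """Matlab's A * B: inner products of rows of a with columns of b."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

print(elementwise_mul(A, B))  # [[5, 12], [21, 32]]
print(matrix_mul(A, B))       # [[19, 22], [43, 50]]
```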
In IDL (which I use more frequently than Matlab), "+", "-", "*", and
"/" are all vector operators. "#" is matrix multiplication (actually
"#" is used for Fortran-style arrays and "##" is used for C style
arrays, which can be very confusing). There is no operator for
matrix inversion.
In Mathematica, elementwise multiplication is a space or "*", and matrix
multiplication is given by the "." operator. There is no matrix
division operator.
In APL (http://www.acm.org/sigapl/) "+", "-", "*", and "/" are
vector-style operations. They are even more general in that they can
be applied to specific ranks (as posted by Hinsen Konrad). Special
operators are used for matrix inversion. Matrix multiplication is
expressed using a combination of operators with the inner product
operator (specifically, "+.x"). As much as I like the generality and
completeness of the functional-style operators in APL and its
descendant J, I think that the operator explosion makes the code
extremely unreadable for the uninitiated.
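The "+.x" idea can be sketched in Python as an inner-product
combinator parameterized by two binary operators (a simplification of
APL's actual rules; the names here are illustrative only):

```python
# Sketch of APL's generalized inner product f.g applied to two
# matrices: ordinary matrix multiplication is the special case
# f=add, g=mul (APL's "+.x").  Illustrative code only.
from functools import reduce
import operator

def inner(f, g, a, b):
    """Combine rows of a with columns of b using g elementwise,
    then reduce each combination with f."""
    return [[reduce(f, (g(x, y) for x, y in zip(row, col)))
             for col in zip(*b)]
            for row in a]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# +.x : ordinary matrix multiplication
print(inner(operator.add, operator.mul, A, B))  # [[19, 22], [43, 50]]
# min.+ : one step of shortest-path (tropical) composition
print(inner(min, operator.add, A, B))           # [[6, 7], [8, 9]]
```

The second call shows why the generality is attractive: swapping the
two operators yields a different, useful product from the same code.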
The following are the same as Matlab in their use of vector and matrix
operations and may be of interest for their other features:

octave  http://www.che.utexas.edu/octave.html (maintained by the FSF,
        the GNU people who produce gcc, g++, emacs, etc.)
RLaB    ftp://csi.jpl.nasa.gov/pub/matlab/RLaB/ (also GNU).
tela is tensor oriented (allowing arrays of more than two dimensions),
so it uses generalizations of matrix multiplication and inner products
and is a nice improvement on that point over Matlab.
In short, I agree with the proposal by Hinsen Konrad
<firstname.lastname@example.org>, i.e., "+", "-", "*", and "/" as
vector-style operations, applied rank-wise.
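One plausible reading of "applied rank-wise" can be sketched in plain
Python (a simplification of APL's actual rank-conformance rules; all
names are illustrative):

```python
# Illustrative sketch of a rank-wise "+": when the operands differ in
# rank, the lower-rank one is reused across the leading dimension of
# the higher-rank one (a simplified form of APL-style rank extension).

def rank(a):
    """Number of dimensions of a nested-list array (scalar -> 0)."""
    return 1 + rank(a[0]) if isinstance(a, list) else 0

def add(a, b):
    ra, rb = rank(a), rank(b)
    if ra == rb == 0:
        return a + b                 # two scalars
    if ra == rb:
        return [add(x, y) for x, y in zip(a, b)]
    if ra > rb:                      # reuse b along a's leading axis
        return [add(x, b) for x in a]
    return [add(a, y) for y in b]    # reuse a along b's leading axis

M = [[1, 2], [3, 4]]
v = [10, 20]
print(add(M, v))    # [[11, 22], [13, 24]]  (v added to each row)
print(add(M, 100))  # [[101, 102], [103, 104]]
```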
I prefer to have arbitrary dimensioned arrays [i.e., tensors or tables
as email@example.com (Hinsen Konrad) referred to them] rather
than only one dimensional vectors or two dimensional matrices.
I prefer that "+", "-", "*", "/" all perform vector style operations.
This makes their behavior consistent among each other. (There was a
posted comment about "*" not applying to complex-valued arrays.)
In most of my scientific programming, I encounter these vector style
operators much more often than matrix multiplication and division.
They replace the many "for" loops that are so common in numerical
algorithms. They are displayed compactly and execute much faster than
the loops in an interpreted language.
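The loop-replacement point can be illustrated with a sketch (plain
Python; with true vector "+" and "*" operators the second form would
collapse further, to simply 2.0 * x + y):

```python
# A typical explicit loop for z = 2x + y over two data vectors ...
x = [0.0, 0.5, 1.0, 1.5]
y = [1.0, 2.0, 3.0, 4.0]

z_loop = []
for i in range(len(x)):
    z_loop.append(2.0 * x[i] + y[i])

# ... versus a single vector-style expression (written here with a
# comprehension standing in for elementwise "+" and "*").
z_vec = [2.0 * xi + yi for xi, yi in zip(x, y)]

print(z_loop == z_vec)  # True
print(z_vec)            # [1.0, 3.0, 5.0, 7.0]
```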
When using higher dimensional arrays, matrix multiplication is rather
specialized even when generalized to tensors. Matrix multiplication
is actually a specialization of a general inner product (e.g., the
implementation in APL). As such, if matrix multiplication had to have
an operator I would choose a new one. Or, we would be better served
by a generalized inner product operator. But if we don't want to add
new operators, I would let "*" be the more common vector (elementwise)
operation.
The Matlab approach is good, but it requires additional operators like
".*" and "./". Matlab also suffers from limitations to two dimensions
(this is the main motivation behind the development of Tela). "/"
does not generalize easily to the higher dimensions and the
generalization of "*" is useful but limited when applied (as in Tela)
only along the inner dimensions of two tensors. At the higher
dimensions the elementwise operators applied at specific ranks are
much more common and useful than a generalized matrix multiplication
operator. If it is desirable to avoid the proliferation of operators
as in APL and J then matrix multiplication and division would be best
left as function calls.
There are additional reasons to advise against a "/" operator for
matrix division. First there is the issue of whether a left or right
inverse is desired (this is handled in Matlab by using two different
operators "/" and "\"). Second, many matrix inversion problems are
best solved using decomposition algorithms, such as LU decomposition
or SVD, that use multiple steps wherein the intermediate results of
the decomposition may be saved for solving the same problem (e.g., with
many different parameter matrices.) Additionally, the user often
would like to pick which type of inversion algorithm to use and may
want to override default parameters for these algorithms. I would
prefer that matrix inversion be left to a group of explicit function
calls.
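The decompose-once, solve-many pattern can be sketched in plain Python
(a textbook Doolittle LU without pivoting, so it assumes the leading
minors are nonzero; illustrative only, not a production solver):

```python
# Sketch of why explicit decomposition beats a "/" operator: factor
# the matrix once, then reuse the factors for many right-hand sides.

def lu(a):
    """Return (L, U) with a = L U (matrix product), L unit lower
    triangular, U upper triangular.  No pivoting, for brevity."""
    n = len(a)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in a]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def solve(L, U, b):
    """Solve L U x = b by forward then back substitution."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                      # forward: L y = b
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):            # back: U x = y
        x[i] = (y[i] - sum(U[i][j] * x[j]
                           for j in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu(A)                       # factor once ...
print(solve(L, U, [10.0, 12.0]))   # ... reuse for each right-hand side
print(solve(L, U, [7.0, 9.0]))
```

With a bare "A / b" operator the factorization would be recomputed on
every division, and there would be no obvious place to choose the
algorithm or its parameters.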
I suppose what is implemented for an array extension to Python depends
on the end goal:
1) a Matlab-like array extension for Python is oriented toward
2-dimensional matrix linear algebra. Although limited in scope,
this is useful and in particular allows very compact readable
notation for linear algebra equations. However, it also tends to
be stuck on a row/column view of arrays.
2) an array extension to handle higher dimension arrays (call them
tensors or tables) that can easily handle generalized rank
arithmetic (APL or J style as briefly set forth in Hinsen Konrad's
post) and with specialized functions for matrix multiplication and
division. Generalized inner and outer products (taking user
specified binary operators) would also be very helpful. This view
is more compatible with Jim Fulton's "n-dimensional matrices are
sequences of n-1-dimensional matrices."
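That recursive view can be sketched directly: one definition gives
elementwise arithmetic at every rank (plain Python, illustrative names
only):

```python
# Sketch of "n-dimensional matrices are sequences of (n-1)-dimensional
# matrices": a single recursive definition covers every rank.
from operator import add, mul

def binop(op, a, b):
    """Apply a scalar binary op elementwise to two equally-shaped
    nested-list arrays of any rank."""
    if isinstance(a, list):          # an n-d array is a sequence of
        return [binop(op, x, y)      # (n-1)-d arrays: recurse into it
                for x, y in zip(a, b)]
    return op(a, b)                  # rank 0: plain scalars

v = [1, 2, 3]
M = [[1, 2], [3, 4]]
T = [[[1], [2]], [[3], [4]]]         # a rank-3 "tensor"

print(binop(add, v, v))  # [2, 4, 6]
print(binop(mul, M, M))  # [[1, 4], [9, 16]]
print(binop(add, T, T))  # [[[2], [4]], [[6], [8]]]
```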
In either case, I think it would be a mistake to add too many new
operators in the APL or J fashion. Such clutter would reduce the
readability of Python programs. One also runs the risk of trying to
make a subset of Python look like a functional programming language,
which could further confuse users.
The Applied Physics Laboratory
The Johns Hopkins University
Laurel, MD 20723-6099
MATRIX-SIG - SIG on Matrix Math for Python
send messages to: firstname.lastname@example.org
administrivia to: email@example.com