Discussion: new operators for numerical computation

Konrad Hinsen hinsen at cnrs-orleans.fr
Fri Jul 21 05:53:36 EDT 2000


Travis Oliphant <olipt at mayo.edu> writes:

> The only question is can the (*) be implemented in the current grammar?  

I see no reason why not. It's no different from other multi-character
operators (!=, <>, **).

> I don't think we should start introducing [*] {*} variants.  That would
> likely weaken our case in the PEP.  

Exactly. In the heat of an active discussion we should not overlook
that we (numerical/scientific users of Python) are a minority in the
Python community. Moreover, our additional operators are not likely to
be of much interest to others. We'll have a hard time arguing for
*any* additional operators, so we should be careful to propose the
smallest reasonable list and avoid anything that could affect other
users in a negative way.

Python esthetics are also an important point, although one that is
difficult to cast into rules. Many Pythonistas object strongly to
Perl-like "line noise" syntax. What makes Python syntax particularly
clear is its close resemblance to pseudo-code; there are very few
features one would not use when writing illustrative pseudo-code. I
don't think any proposal introducing @ into matrix operators has a
serious chance of being accepted.

The same goes for .* and *. because of the conflict with floating-point
notation. We might come up with a rule that disambiguates the syntax
from the parser's point of view, but it would always remain confusing
to users, and there is also a risk of breaking existing code.

Another point to consider is implementation. In the current Python
implementation, every operator must be implemented by a special method
(or a C-level function for extension types). This can quickly become
messy. Other arrangements are possible (e.g. all matrix operations
could be mapped to a.__matrix_op__(b, op_code)), but in any case it is
better to keep the number of new operators to a minimum.
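To make the alternative arrangement concrete, here is a hypothetical
sketch of single-method dispatch over an op code. The class, the
__matrix_op__ name, and the op codes are purely illustrative; this is
not an existing Python protocol:

```python
class Matrix:
    """Toy vector/matrix wrapper; illustrative only."""

    def __init__(self, rows):
        self.rows = rows

    def __matrix_op__(self, other, op_code):
        # One entry point for all matrix operations; the op code
        # selects which operation to perform.
        if op_code == "outer":
            return [[a * b for b in other.rows] for a in self.rows]
        if op_code == "inner":
            return sum(a * b for a, b in zip(self.rows, other.rows))
        raise NotImplementedError(op_code)
```

With this scheme, each new operator in the grammar would compile down
to a call like `a.__matrix_op__(b, "outer")`, so adding an operator
costs one op code rather than one new special method.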

It is also worth reminding everyone that we are not discussing the
addition of features that were not available before; this is only new
syntax for operations that already exist. Unless an operation is
really a frequent one, there is no point in giving it an operator.

Therefore I agree with Travis that we should probably not ask for
more than the following ones:

> (*)  outer product
> (.)  inner product
> (|)  matrix solve
> (**) matrix power  
>      (^) is better but only if ^ is power (which is not likely!)
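All four of these operations already exist as ordinary function calls,
which is the point: the proposal is only about syntax. For
illustration, in today's NumPy (this post predates it, so the
spellings below are modern equivalents, not what was available then)
they read:

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])
M = np.array([[2.0, 0.0], [0.0, 3.0]])

outer = np.outer(a, b)                # proposed (*): outer product
inner = np.dot(a, b)                  # proposed (.): inner product
solve = np.linalg.solve(M, a)         # proposed (|): matrix solve
power = np.linalg.matrix_power(M, 3)  # proposed (**): matrix power
```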

For "matrix solve", I propose to follow APL and define it in the sense
of a least-squares solution, implemented via singular value
decomposition (SVD). There are two reasons for this:

1) SVD is stable even for near-singular (or downright singular!) matrices.
   It is one of the few fool-proof algorithms in linear algebra,
   and it spares inexperienced users a lot of trouble.

2) Linear least-squares fits are a sufficiently frequent operation
   in their own right, especially in data analysis tasks.

The only disadvantage is that SVD is slower than a standard solve
algorithm; the faster algorithm could still be made available as a
function in some module.
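A minimal sketch of such an SVD-based least-squares solve, using
modern NumPy for illustration (the function name and the rcond cutoff
are my own choices, not part of the proposal):

```python
import numpy as np

def svd_solve(A, b, rcond=1e-12):
    # Least-squares solution of A x = b via SVD. Singular values
    # below the cutoff are treated as zero, so the solve stays
    # stable even when A is singular or nearly so, and returns the
    # minimum-norm least-squares solution in that case.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > rcond * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))
```

For a well-conditioned square A this agrees with an ordinary solve;
for a singular or rectangular A it quietly produces the least-squares
answer, which is exactly the fool-proof behavior argued for above.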
-- 
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hinsen at cnrs-orleans.fr
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------
