[PYTHON MATRIX-SIG] Comments on 'a few thoughts'

Michael McLay mclay@eeel.nist.gov
Fri, 20 Oct 95 12:15:01 EDT


P. Dubois writes:

> 1. Usage of the matrix operators is perhaps 1% or less of the usage of
> the element-wise numbers. This is because when two-dimensional
> matrices do arise, they usually represent the spatial or other type of
> discretization, far more often than they represent operators.  If I
> had it to do over again, I would not have special operators, just a
> function call, since the light usage is not worth the trouble. 

Wouldn't it be appropriate to make them methods instead of functions?

I agree that a good design rule would be to not implement symbolic
operators for rarely used operations.  Unfortunately convincing
someone who is creating an operator that it is truly rare may not be
easy.  The goal of this design rule is to ensure code is readable.
Readability is dependent on the vocabulary of the audience.  If code
is only to be read by a small group who have agreed upon a
standardized shorthand, then overloading symbols is OK.  However, to
reach the broadest audience the use of functions or methods is
necessary in order for the code to be unambiguous to the human parsing
the source.  Granted, it will make the application source code a
little wordy, but that is the price to be paid for clarity.  Giving
programmers the unrestricted ability to overload operators can be
dangerous since it tends to lead to the development of obscure
dialects in notation.
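
For instance, a matrix product written as an explicit method call
rather than an overloaded symbol might look like the sketch below (the
class and method names are hypothetical, just to illustrate the
style):

```python
class Matrix:
    """A minimal dense matrix, stored as a list of row lists."""

    def __init__(self, rows):
        self.rows = rows

    def matrixmultiply(self, other):
        """Return the matrix product self * other as a new Matrix."""
        result = []
        for row in self.rows:
            new_row = []
            for j in range(len(other.rows[0])):
                total = 0
                for k in range(len(row)):
                    total += row[k] * other.rows[k][j]
                new_row.append(total)
            result.append(new_row)
        return Matrix(result)

a = Matrix([[1, 2], [3, 4]])
b = Matrix([[5, 6], [7, 8]])
c = a.matrixmultiply(b)
print(c.rows)  # [[19, 22], [43, 50]]
```

The call is wordier than `a * b`, but any reader can tell a matrix
product from an element-wise product without knowing a local dialect.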

While I'm on the subject of design rules...  Perhaps the Mathematica
convention of spelling out the full name of everything instead of
choosing arbitrary abbreviations should also be adopted.  This would
be consistent with the Python tradition of making the source code
readable to others.  A simple example illustrates the point.  How
would you interpret the following expression?

	speed = m/hr

In scanning the text, can you be sure whether this is miles/hour or
meters/hour?  I'd also suggest that only SI units be used in setting
up libraries.  It is simple enough to do unit conversion at the user
interface.  
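
A hypothetical units module following both rules might define
spelled-out SI conversion factors and leave any conversion to the
caller (the names and the module layout here are my own invention):

```python
# Hypothetical module of spelled-out unit factors, SI base units
# (meters, seconds) carrying the value 1.0.
meters = 1.0
seconds = 1.0
hours = 3600.0 * seconds
miles = 1609.344 * meters

# Now the reader can tell at a glance which units are meant, and the
# result is automatically expressed in SI units (meters/second).
speed = 100.0 * miles / hours
print(speed)  # 44.704
```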

> 3. I think it is mistaken to try to reduce the implementor's job by
> doing many types in one like the "array" built-in object does.  
> Having basic double/integer/complex stuff work fast should be the
> primary consideration, even if it means some tedious and not terribly
> elegant coding. 

> 5. One might want to consider having a very fast, very raw vector class
> on which to base higher level classes that have concepts like shape, etc.

This proposal suggests adding many specialized numerical types to
Python.  Each type would be tuned for efficient performance in solving
a specific problem.  From an implementation viewpoint this should not be
difficult to put in place.  Each new numeric type would be implemented
as a dynamically linked module.  This solution may be inevitable.
Each application domain will by necessity build the efficient set of
types its calculations require.

This approach to implementation may be a pragmatic necessity since at
the implementation level the computational requirements will demand
that all operations on large data sets run at the speed of compiled
code.  Dividing the problem into discrete, importable modules
compartmentalizes the work.  Each numeric type module can
be implemented independently.  All the operations needed for a numeric
type would be incorporated into the module.
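
As a sketch of that compartmentalization, each module would bundle one
type together with its operations.  In practice such a module would be
a compiled extension for speed; this pure-Python stand-in (with
hypothetical names) only shows the shape of the interface:

```python
# Hypothetical stand-alone numeric type module: a raw double vector.

class DoubleVector:
    """A flat vector of doubles, with its operations kept in-module."""

    def __init__(self, values):
        self.values = [float(v) for v in values]

    def add(self, other):
        return DoubleVector([a + b for a, b in
                             zip(self.values, other.values)])

    def scale(self, factor):
        return DoubleVector([factor * v for v in self.values])

v = DoubleVector([1, 2, 3]).add(DoubleVector([4, 5, 6])).scale(0.5)
print(v.values)  # [2.5, 3.5, 4.5]
```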

This proposal just solves the easy part of the problem.

Hinsen Konrad writes:
> When I made this suggestion, I referred to modern implementations of
> APL (including J), which in fact have many internal representations
> for numbers, for efficiency reasons. A typical APL implementation
> has
> 1) Bits
> 2) small integers (i.e. bytes)
> 3) long integers (4 bytes)
> 4) real numbers
> 5) complex numbers
> But to the user all this looks like a single number type, since all
> conversions happen automatically. The price to pay is not in efficiency
> (internal APL operations tend to outperform Fortran), but in a
> rather complex implementation, which has to decide the optimal
> data type based on various criteria. For example, the high cost
> of unpacking bit arrays means that they will be used only for
> very large objects and/or when memory runs down.
> 
> The advantage of this is that numbers behave like you would expect
> from mathematics, e.g. 1/3 equals 1./3., not 0. This prevents many
> errors. There are actually more user-friendly features like this in
> APL, e.g. non-zero comparison tolerance.

This is the hard part of the problem to solve. 

Providing automatic type conversion would be a great feature and would
help reduce the number of bugs and the complexity of applications.
The only hitch is that it may take a significant effort to create a
working implementation.  Assume that the first solution is
inevitable, that is, that people will write point solutions to their
problems.  Is there something that would prevent the addition of
automatic type conversion from being implemented as a layer on top of
the numeric type modules that will be created independently?  What
rules need to be established to ensure that the initially independent
numeric types can be integrated into the more elegant solution?
numeric types can be integrated into the more elegant solution?
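
One way such a layer could work, sketched below with hypothetical
function names, is to promote both operands to a common (wider)
representation before dispatching to the underlying type module, in
the spirit of the automatic conversions Konrad describes in APL:

```python
# Hypothetical conversion layer: pick the wider of two numeric kinds,
# convert both operands, then hand off to that kind's implementation.

KIND_ORDER = ['int', 'float', 'complex']   # narrowest to widest

def kind_of(x):
    if isinstance(x, complex):
        return 'complex'
    if isinstance(x, float):
        return 'float'
    return 'int'

def promote(a, b):
    """Return (a, b) converted to their common (wider) kind."""
    wider = max(kind_of(a), kind_of(b), key=KIND_ORDER.index)
    cast = {'int': int, 'float': float, 'complex': complex}[wider]
    return cast(a), cast(b)

x, y = promote(1, 2.5)
print(x, y)  # 1.0 2.5
```

The type modules themselves stay simple; only this top layer needs to
know the ordering of representations.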

> Actually, my dream language would also handle symbolic operations
> in the style of Mathematica or Maple...

Yes, and one of the features from Mathematica that is missing is the
ability to tag objects with symbols that represent units of measure.
In Mathematica you can do the following:

	In[1]:= 12 meters

The meters symbol tags the integer 12 as its unit of measure.  
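
A Python equivalent of that tagging could wrap a number and its unit
symbol in one object; the class below is a hypothetical sketch, not an
existing library:

```python
class Quantity:
    """A number tagged with a unit-of-measure symbol."""

    def __init__(self, value, unit):
        self.value = value
        self.unit = unit

    def __add__(self, other):
        # Refuse to mix units rather than silently convert.
        if self.unit != other.unit:
            raise ValueError('cannot add %s to %s' % (self.unit, other.unit))
        return Quantity(self.value + other.value, self.unit)

    def __repr__(self):
        return '%s %s' % (self.value, self.unit)

length = Quantity(12, 'meters')
print(length + Quantity(3, 'meters'))  # 15 meters
```

Even this minimal version catches unit mismatches that would
otherwise slip through as plain arithmetic.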

Michael

=================
MATRIX-SIG  - SIG on Matrix Math for Python

send messages to: matrix-sig@python.org
administrivia to: matrix-sig-request@python.org
=================