[MATRIX-SIG] reverse of take?

Andrew P. Mullhaupt amullhau@ix.netcom.com
Thu, 03 Jul 1997 18:53:51 -0400


At 11:46 AM 7/3/97 -0400, Aaron Watters wrote:
>
>Yes, and in the "circle" or "fractal border" on byte image
>graphics example you would not get
>much savings in space and also perhaps a speed penalty by saving
>indices versus pointers, because the indices would be completely
>irregular down to the pixel level. 

You may or may not do better with pointers in this case.

If you are allowed to have pointers to arrays as array elements, then
it is usually not necessary to have separate implementation concerns
for indices and pointers - you can control that in the higher-level
language, where you have more leverage.
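
For instance (a sketch in present-day Python, where plain lists stand in
for array objects and the names are my own):

   # If array elements may themselves be arrays, the "pointer" view is
   # just another array whose elements reference pieces of the original,
   # with no separate pointer machinery at the implementation level.
   row0 = [1.0, 2.0, 3.0]
   row1 = [4.0, 5.0, 6.0]

   matrix = [row0, row1]   # an array of arrays: elements are references

   view = [row1]           # a "pointer-style" selection: shares storage
   view[0][0] = 99.0       # writing through the view...
   print(matrix[1][0])     # ...shows up in the original: 99.0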

> I suppose the question becomes how common is regularity?
> You matrix guys would know better than I.  For graphics type operations
> I can imagine essentially 2 common cases: rectangular subregions
> (slices as they now exist), and more or less complete irregularity.
> I can't comment on other possible applications.

Regularity is by far the most common case. Most non-numerical applications
that use matrices as objects use "dense" matrices.

Your version of 'irregularity' corresponds to one style of "sparse" matrices,
and there are many applications for that too. Sometimes the pointers for
sparse matrices are organized into linked lists for each dimension (e.g. a
list of lists by row and/or by column).
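
Here's a minimal Python sketch of that list-of-lists style (representation
only; the names are mine, and a real implementation would also keep column
links and support fast insertion):

   # Each row holds (column_index, value) pairs, sorted by column;
   # absent entries are implicit zeros.
   rows = [
       [(0, 4.0), (3, -1.0)],   # row 0: nonzeros at columns 0 and 3
       [],                      # row 1: entirely zero
       [(2, 7.5)],              # row 2: one nonzero at column 2
   ]

   def get(rows, i, j):
       # Walk row i's list looking for column j.
       for col, val in rows[i]:
           if col == j:
               return val
       return 0.0

   print(get(rows, 0, 3))   # -1.0
   print(get(rows, 1, 2))   # 0.0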

There are many "structured" sparse cases, the most important being band
matrices - which are implemented by embedding them in index-like storage.
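
To make the band case concrete, here is a sketch (the diagonal-by-diagonal
layout is my own choice; LAPACK-style banded storage differs in detail) of
a tridiagonal N x N matrix kept in O(N) storage:

   N = 5
   lower = [1.0] * (N - 1)   # subdiagonal:   A[i+1][i]
   diag  = [4.0] * N         # main diagonal: A[i][i]
   upper = [1.0] * (N - 1)   # superdiagonal: A[i][i+1]

   def matvec(x):
       # y = A x using only the stored bands: O(N) work and storage.
       y = [diag[i] * x[i] for i in range(N)]
       for i in range(N - 1):
           y[i]     += upper[i] * x[i + 1]
           y[i + 1] += lower[i] * x[i]
       return y

   print(matvec([1.0] * N))   # [5.0, 6.0, 6.0, 6.0, 5.0]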

Now in graphics (where you seem to be coming from), you find both to be
useful. The dense versions are good for image processing and the sparse
versions can be useful for the more computational-geometry kinds of things.

Long experience in scientific computing and graphics shows that you really
can't settle for "all indices" vs. "all pointers".

There is a reason for this.

As time goes on, computers get bigger, faster, and cheaper. The problems
that people run on them involve larger and larger sizes N. This means that
an O(N^2) or O(N^3) dense matrix algorithm with a small constant, which won
in the past, becomes less attractive than an O(N) or O(N^2) algorithm for
structured matrices whose constant was previously prohibitive. This favors
the use of flexible pointer-based structures and algorithms.
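
A back-of-the-envelope illustration (the constants are invented purely to
make the point):

   # Dense O(N^2) with a tiny constant vs. structured O(N) with a big
   # one. Dense wins until N passes the crossover; growing problem
   # sizes push everyone past it.
   c_dense, c_struct = 1.0, 10000.0
   crossover = c_struct / c_dense   # solve c_dense*N**2 == c_struct*N

   for N in (1000, 10000, 100000):
       winner = "dense" if c_dense * N**2 < c_struct * N else "structured"
       print("N=%7d: %s wins" % (N, winner))
   print("crossover at N = %d" % crossover)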

By the same token, CPUs are becoming more and more superscalar/parallel,
but memory systems are only keeping up in _size_; they are not keeping up
in speed. Instruction bandwidth is becoming free while memory bandwidth is
becoming more precious, so being control bound is less and less tolerable -
if you can't predict the branches and profit by speculative execution, then
you will not get the most out of your machine. This _strongly_ favors the
highly regular algorithms and structures of the dense representation.
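
You can see a shadow of this even from Python (a rough illustration, not a
rigorous benchmark - interpreter overhead mutes the hardware effect
considerably):

   import time

   N = 1000000
   dense = list(range(N))

   # The same numbers as a chain of (value, next) pairs: an irregular,
   # pointer-chasing structure.
   linked = None
   for v in reversed(dense):
       linked = (v, linked)

   t0 = time.perf_counter()
   s1 = sum(dense)              # regular, predictable traversal
   t1 = time.perf_counter()

   s2, node = 0, linked
   while node is not None:      # dependent loads: each step waits on
       s2 += node[0]            # the previous pointer fetch
       node = node[1]
   t2 = time.perf_counter()

   assert s1 == s2
   print("contiguous: %.3fs   linked: %.3fs" % (t1 - t0, t2 - t1))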

What these trends seem to be doing is making it more important to choose
the correct implementation for the problem, and making it harder to "fake"
one style with the other. APL bit the bullet about ten years ago, with the
advent of what I call "modern" APL - where an array can be an element of an
array, which gives you the pointer-style operations APL had previously
lacked. Fortran had to bite this bullet at about the same time with F90,
but going in the other direction.

So a good language is really going to want to go both ways. Maple and
Matlab have slightly different approaches to getting this without requiring
any syntax modification.

Python's object orientation is plenty strong enough to handle this sort of
distinction - you can have a class of dense arrays and a class of sparse
arrays and nobody gets hurt.
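
A minimal sketch of what that could look like (class and method names are
my own invention, not a proposal for any existing API):

   class DenseMatrix:
       def __init__(self, nrows, ncols, fill=0.0):
           self.shape = (nrows, ncols)
           self.data = [[fill] * ncols for _ in range(nrows)]

       def __getitem__(self, key):
           i, j = key
           return self.data[i][j]

   class SparseMatrix:
       def __init__(self, nrows, ncols):
           self.shape = (nrows, ncols)
           self.entries = {}   # (i, j) -> value; zeros are implicit

       def __getitem__(self, key):
           return self.entries.get(key, 0.0)

   def trace(m):
       # Client code sees only the shared indexing protocol.
       return sum(m[i, i] for i in range(min(m.shape)))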

However, it makes _zero_ sense to implement the common two-way linked-list
style of sparse matrix under the current restriction of indexing to slices,
and that's sort of why we're here.

I also want to put some perspective on what is "esoteric" in indexing.
Everything that has been brought up so far is very tame and you can find it
going back at least ten years in several different successful languages.
It's not like there is a lot of technical risk on the table.

_Esoteric_ is what the post-APL languages are getting into such as K where
there is no distinction between array indexing and function calls, so an
expression such as (and I'm sparing everyone the sight of actual K code)

   goo = foo(, , z)

means that

   goo(x, y)

is the same as

   foo(x, y, z)

The K-speakers call this "currying", a term which I believe originates in
the lambda calculus (it is named after the logician Haskell Curry) and came
into programming through the Lisp community.
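
The closest Python analogue I can think of is functools.partial - strictly
speaking this is "partial application", and K's elided-argument form
behaves the same way (the value 42 below is just an illustrative stand-in):

   from functools import partial

   def foo(x, y, z):
       return (x, y, z)

   # Like K's  goo = foo(, , z): fix the trailing argument, leave the
   # leading ones open.
   goo = partial(foo, z=42)

   print(goo(1, 2))   # prints (1, 2, 42), i.e. foo(1, 2, 42)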

So far in this thread nobody is proposing anything which isn't extremely
vanilla as far as array languages go.

Later,
Andrew Mullhaupt



_______________
MATRIX-SIG  - SIG on Matrix Math for Python

send messages to: matrix-sig@python.org
administrivia to: matrix-sig-request@python.org
_______________