[Numpy-discussion] Re: [Matrix-SIG] An Experiment in code-cleanup.

Travis Oliphant Oliphant.Travis at mayo.edu
Tue Feb 8 12:38:26 EST 2000


> > 3) Facility for memory-mapped dataspace in arrays.
> 
> I'd really like to have that...

This is pretty easy to add, but it does require some changes to the
underlying structure, so you can expect it.
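
(A sketch of what this buys the user, illustrated with NumPy's
numpy.memmap rather than the change proposed here; 'data.bin' is just a
placeholder file name.  The point is that the array's data buffer lives in
a file and is paged in by the operating system on demand, so large
datasets can be indexed without reading them into memory wholesale.)

    import numpy as np

    # Create a file-backed array; the data lives on disk and is paged in
    # by the OS rather than held entirely in memory.
    a = np.memmap('data.bin', dtype=np.float64, mode='w+', shape=(1000,))
    a[:100] = 1.0        # writes go to the mapped file
    a.flush()            # push dirty pages out to disk

    # Reopen read-only later without copying the data into memory.
    b = np.memmap('data.bin', dtype=np.float64, mode='r', shape=(1000,))
    print(b[:5])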
> 
> > 4) Slices become copies with the addition of methods for current strict
> > referencing behavior.
> 
> This will break a lot of code, and in a way that will be difficult to
> debug. In fact, this is the only point you mention which would be
> reason enough for me not to use your modified version; going through
> all of my code to check what effect this might have sounds like a
> nightmare.

I know this will be a sticky point.  I'm not sure what to do exactly, but
the current behavior and implementation make the semantics of slicing an
array with a sequence of indices problematic: I don't see a way to
represent a reference to a sequence of indices in the underlying structure
of an array.  So such slices would have to be copies rather than
references, which makes the semantics inconsistent.
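
To make the inconsistency concrete (using NumPy syntax purely as an
illustration of the two semantics, not of any particular implementation):

    import numpy as np

    a = np.arange(5)

    s = a[1:4]        # ordinary slice: a reference (view) into a's data
    s[0] = 99
    print(a)          # a now reads [ 0 99  2  3  4]

    f = a[[1, 3]]     # selection by a sequence of indices: necessarily a copy
    f[0] = -1
    print(a)          # a is unchanged by the write into f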

> 
> I see the point of having a copying version as well, but why not
> implement the copying behaviour as methods and leave indexing as it
> is?

I want to agree with you, but I think we may need to change the behavior
eventually, so when is it going to happen?

> 
> > 5) Handling of sliceobjects which consist of sequences of indices (so that
> > setting and getting elements of arrays using their index is possible). 
> 
> Sounds good as well...

This facility is already embedded in the underlying structure.  My plan is
to go with the original idea that Jim Hugunin and Chris Chase had for
slice objects.  The slice object in Python is already general enough for
this to work.
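
For reference, the interpreter already hands __getitem__ whatever index
object appears between the brackets, untouched; a toy class (plain Python,
nothing array-specific) shows what an array implementation would actually
receive:

    class ShowIndex:
        # Echo whatever index object Python passes to __getitem__.
        def __getitem__(self, index):
            return index

    x = ShowIndex()
    print(x[1:10:2])        # slice(1, 10, 2)
    print(x[[1, 3, 5]])     # the list itself: [1, 3, 5]
    print(x[1:5, [0, 2]])   # a tuple mixing a slice and an index sequence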

> 
> > 6) Rank-0 arrays will not be autoconverted to Python scalars, but will
> > still behave as Python scalars whenever Python allows general scalar-like
> objects in its operations.  Methods will allow the
> > user-controlled conversion to the Python scalars.  
> 
> I suspect that full behaviour-compatibility with scalars is
> impossible, but I am willing to be proven wrong. For example, Python
> scalars are immutable, arrays aren't. This also means that rank-0
> arrays can't be used as keys in dictionaries.
> 
> How do you plan to implement mixed arithmetic with scalars? If the
> return value is a rank-0 array, then a single library returning
> a rank-0 array somewhere could mess up a program well enough that
> debugging becomes a nightmare.
>
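
Before getting to mixed arithmetic, the rank-0 tension can be made
concrete.  The snippet below uses present-day NumPy only to illustrate the
semantics in question (mutability, unhashability, and explicit
conversion); it is not the proposed design:

    import numpy as np

    r = np.array(3.0)            # a rank-0 array
    print(r + 1.0)               # behaves like a scalar in arithmetic: 4.0

    r[()] = 5.0                  # but it is mutable, unlike a Python float
    print(float(r), r.item())    # explicit, user-controlled conversion

    try:
        d = {r: 'value'}         # and it cannot serve as a dictionary key
    except TypeError as err:
        print(err)               # unhashable type: 'numpy.ndarray'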

Mixed arithmetic in general is another sticky point.  I went back and read
the discussion of this point which occurred in 1995-1996.  It was very
interesting reading and a lot of points were made.  Now we have several
years of experience and we should apply what we've learned (of course
we've all learned different things :-) ).

Konrad, you had a lot to say on this point 4 years ago.  I've had a long
discussion with a colleague who is starting to "get into" Numerical
Python, and he has really been annoyed with the current mixed arithmetic
rules.  They seem to try to outguess the user.  The spacesaving concept
helps, but it still seems like a hack to me.

I know there are several opinions, so I'll offer mine.  We need
simple rules that are easy to teach a newcomer.  Right now the rule is
fairly simple in that coercion always proceeds up.  But mixed arithmetic
between a float and a double does not really produce a result with
double-precision accuracy -- yet upward coercion is our rule.  I think any
automatic conversion should go the other way.
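
(What "coercion proceeds up" means in practice, with today's type names
used only for concreteness: combining a single-precision array with a
double-precision one widens the result, even though the single-precision
operand contributes no extra accuracy.)

    import numpy as np

    single = np.arange(3, dtype=np.float32)
    double = np.arange(3, dtype=np.float64)

    # Upward coercion: the mixed result is stored in double precision,
    # although half of its inputs were only accurate to single precision.
    print((single + double).dtype)    # float64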

Konrad, 4 years ago you talked about unexpected losses of precision if
this were allowed to happen, but I couldn't understand how.  To me, it is
unexpected to have double precision arrays which are really only
carrying single-precision results.  My idea of the coercion hierarchy is
shown below, with conversion always happening down when called for.  The
Python scalars get mapped to the "largest precision" in their category and
then the normal coercion rules take place.

The casual user will never use single precision arrays and so will not
even notice they are there unless they request them.   If they do request
them, they don't want them suddenly changing precision.  That is my take
anyway.

Boolean
Character
Unsigned
        long
        int
        short
Signed
        long
        int
        short
Real
        /* long double */
        double
        float
Complex
        /* __complex__ long double */
        __complex__ double
        __complex__ float
Object
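
(Under this rule a Python float maps to the largest precision in the Real
category, and any conversion that is then needed goes down rather than up,
so an array the user deliberately made single precision stays single
precision.  The snippet below shows the intended behavior using
present-day NumPy, where essentially this rule was later adopted for
array/scalar mixing; it is an illustration, not the proposed
implementation.)

    import numpy as np

    a = np.arange(3, dtype=np.float32)   # the user asked for single precision

    # The Python float does not force the array up to double; the result
    # keeps the precision the user chose.
    print((a + 2.0).dtype)               # float32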

> > 7) Addition of attributes so that different users can configure aspects of
> > the math behavior to their heart's content.
> 
> You mean global attributes? That could be the end of universally
> usable library modules, supposing that people actually use them.

I thought I did, but I've changed my mind after reading the discussion in
1995.  I don't like global attributes either, so I'm not going there.

> 
> > If there is anyone interested in helping in this "unofficial branch
> > work" let me know and we'll see about setting up someplace to work.  Be
> 
> I don't have much time at the moment, but I could still help out with
> testing etc.

Konrad, you were very instrumental in getting NumPy off the ground in the
first place, and I will always appreciate your input.




