
On Feb 17, 2005, at 1:52 PM, Travis Oliphant wrote:
I'm glad to get the feedback.
1) Types
[...]
One thing has always bothered me, though. Why is a double complex type Complex64, and a float complex type Complex32? This seems to break the idea that the number at the end specifies a bit width. Why don't we just call them Complex64 and Complex128? Can we change this?
My recollection is that that is how we originally did it, until we saw how it was done in Numeric; but maybe my memory is screwed up. I'm happy with real bit widths.
Problems also exist when you are interfacing with hardware or other C or Fortran code. You know you want single-precision floating point; you don't know or care what the bit width is. I think that for the integer types the bit-width specification matters more than it does for the floating-point types. In sum, I think it is important to have the ability to specify it both ways.
I'd agree that supporting both is ideal. Sometimes you really want a specific bit width and don't care what the platform default is, and other times you are tied to the platform default for one reason or another.
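For illustration, here is a minimal sketch of the two conventions in present-day NumPy spellings (the names shown are the ones NumPy eventually settled on, used here only as an example; they need not match the 2005 proposal exactly). Bit-width names count the total width of the element, while precision names follow the platform's C types:

    import numpy as np

    # Bit-width names: the number is the total element width in bits, so a
    # double-precision complex (two 64-bit floats) is complex128.
    assert np.dtype(np.complex64).itemsize * 8 == 64     # single-precision complex
    assert np.dtype(np.complex128).itemsize * 8 == 128   # double-precision complex

    # Precision names: tied to the platform's C types, handy when matching C or
    # Fortran code that declares "float" / "double" rather than a width.
    assert np.dtype(np.single) == np.dtype(np.float32)   # C float on common platforms
    assert np.dtype(np.double) == np.dtype(np.float64)   # C double on common platforms

    # Integers are where explicit widths matter most: np.intc (the C int) can be
    # 32 or 64 bits depending on the platform, while np.int32 / np.int64 are
    # always exactly what they say.
    print(np.dtype(np.intc).itemsize * 8, np.dtype(np.int32).itemsize * 8)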
3) Always returning rank-0 arrays.
This may be a bit controversial, as it is something of a change. But my experience is that quite a bit of extra code gets written to check whether or not a calculation returns a Python scalar (because scalars don't have the same methods as arrays). In particular, len(a) does not work if a is a scalar, but len(b) works if b is a rank-0 array (numeric scalar). Rank-0 arrays are scalars. When Python needs a scalar it will generally ask the object if it can turn itself into an int or a float. A notable exception is indexing into a list (where Python needs an integer and won't ask the object to convert itself). But int(b) always returns a Python integer if the array has only one element. I'd like to know what reasons people can think of for ever returning Python scalars unless explicitly asked for.
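As a concrete sketch of the kind of defensive code being described (the function name and the isinstance check are illustrative assumptions; the exact scalar-versus-rank-0 behavior differed between Numeric, numarray, and later NumPy):

    import numpy as np

    def normalized_sum(a):
        # Sum along the last axis, then make sure the caller always gets an
        # array-like result, whether the reduction produced a scalar or not.
        result = np.sum(a, axis=-1)
        # The defensive check this proposal wants to make unnecessary: if the
        # reduction collapsed to a plain scalar, wrap it again so downstream
        # code can keep using array attributes and methods.
        if not isinstance(result, np.ndarray):
            result = np.asarray(result)
        return result

    print(normalized_sum([1.0, 2.0, 3.0]))            # rank-0 result
    print(normalized_sum([[1.0, 2.0], [3.0, 4.0]]))   # rank-1 result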
I'm not sure this is an important issue for us (either way) so long as the overhead for rank-0 arrays is not much higher than for scalars (for numarray it was an issue). But there are those who argue (Konrad, for example, if I remember correctly) that the definitions of rank and such mean that len() of a rank-0 array should not be 1 and that one should not be able to index rank-0 arrays. I know the argument has been made that this helps support generic programming (not having to check between scalars and arrays), but every time I've asked for specific examples I've found that there are simple alternatives to solve the problem, or that type checks are still necessary because there is no control over what users may supply as arguments. If this is the reason, could it be motivated with a couple of examples to show why it is the only reasonable alternative? (Then you can use it to slay all subsequent whiners.)

Perry
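One sketch of the kind of "simple alternative" alluded to above (an assumption for illustration, not something spelled out in the thread): normalize arguments once at the function boundary with asarray, so the body never cares whether the caller passed a scalar, a list, or an array.

    import numpy as np

    def scale(x, factor=2.0):
        # Coerce whatever the user supplied (scalar, list, array) up front;
        # from here on the body is written purely against the array interface.
        x = np.asarray(x, dtype=float)
        return x * factor

    print(scale(3))           # works for a bare scalar
    print(scale([1, 2, 3]))   # and for a sequence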