PEP 209: Multi-dimensional Arrays

The first draft of PEP 209: Multi-dimensional Arrays is ready for comment. Its primary emphasis is on array operations, but its design is intended to provide a general framework for working with multi-dimensional arrays. This PEP covers a lot of ground and so does not go into much detail at this stage. The hope is that we can fill in the details as time goes on. It also presents several Open Issues that need to be discussed.
Cheers, Paul

Some random PEP talk.
"PB" == Paul Barrett Barrett@stsci.edu writes:
PB> 2. Additional array types
PB> Numeric 1 has 11 defined types: char, ubyte, sbyte, short, int,
PB> long, float, double, cfloat, cdouble, and object. There are no
PB> ushort, uint, or ulong types, nor are there more complex types
PB> such as a bit type which is of use to some fields of science and
PB> possibly for implementing masked-arrays.
True: I would have had a much easier life with a ushort type.
PB> Its relation to the other types is defined when the C-extension
PB> module for that type is imported. The corresponding Python code
PB> is:
>> Int32.astype[Real64] = Real64
I understand this is to be done by the Int32 C extension module. But how does it know about Real64?
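One way to picture this registration is a small pure-Python sketch; the `ArrayType` class and `coerce_types` helper below are invented for illustration and are not the PEP's actual (C) implementation:

```python
# Illustrative sketch only -- ArrayType and coerce_types are invented
# names, not the PEP's actual implementation.

class ArrayType:
    def __init__(self, name):
        self.name = name
        self.astype = {}          # maps other type -> coerced result type

    def __repr__(self):
        return self.name

Int32 = ArrayType("Int32")

# When the Real64 extension module is imported, it already knows about
# the pre-existing Int32 type and registers the relation both ways:
Real64 = ArrayType("Real64")
Int32.astype[Real64] = Real64
Real64.astype[Int32] = Real64

def coerce_types(a, b):
    """Result type when mixing arrays of types a and b."""
    return a if a is b else a.astype[b]

print(coerce_types(Int32, Real64))   # Real64
```

In this reading, Int32 need not know about Real64 in advance; the later-loaded module fills in Int32's table.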
PB> Attributes:
PB>   .name: e.g. "Int32", "Float64", etc.
PB>   .typecode: e.g. 'i', 'f', etc.
PB>     (for backward compatibility)
.typecode() is a method now.
PB> .size (in bytes): e.g. 4, 8, etc.
"element size?"
add.register('add', (Int32, Int32, Int32), cfunc-add)
Typo: cfunc-add is an expression, not an identifier.
An implementation of a (Int32, Float32, Float32) add is possible and desirable as mentioned earlier in the document. Which C module is going to declare such a combination?
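A minimal sketch of the `register` interface, assuming hypothetical type tokens and plain Python callables standing in for the wrapped C functions:

```python
# Sketch of a UFunc registry keyed by an (in, in, out) type signature.
# The type tokens and lambdas are invented for illustration.

class UFunc:
    def __init__(self, name):
        self.name = name
        self.table = {}           # (in1, in2, out) -> implementation

    def register(self, name, signature, func):
        self.table[signature] = func

    def call(self, signature, x, y):
        return self.table[signature](x, y)

Int32, Float32 = "Int32", "Float32"

add = UFunc("add")
add.register("add", (Int32, Int32, Int32), lambda x, y: x + y)
# The mixed combination asked about above would be declared the same
# way, by whichever module knows about both types:
add.register("add", (Int32, Float32, Float32), lambda x, y: x + y)

print(add.call((Int32, Float32, Float32), 2, 0.5))   # 2.5
```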
PB> asstring(): create string from array
Not "tostring" like now?
PB> 4. ArrayView
PB> This class is similar to the Array class except that the reshape
PB> and flat methods will raise exceptions, since non-contiguous
PB> arrays cannot be reshaped or flattened using just pointer and
PB> step-size information.
This was completely unclear to me until here. I must say I find this a strange way of handling things. I haven't looked into implementation details, but wouldn't it feel more natural if an Array would just be the "data", and an ArrayView would contain the dimensions and strides. Completely separated. One would always need a pair, but more than one ArrayView could use the same Array.
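Rob's proposed split can be sketched in a few lines of Python; `DataBuffer` and this `ArrayView` are invented names, and real strides would be measured in bytes rather than elements:

```python
# Sketch of the suggested separation: a bare data buffer plus a view
# that owns the shape/stride mapping.  Names are illustrative.

class DataBuffer:
    def __init__(self, values):
        self.data = list(values)        # stands in for a raw memory block

class ArrayView:
    def __init__(self, buf, shape, strides, offset=0):
        self.buf, self.shape = buf, shape
        self.strides, self.offset = strides, offset

    def __getitem__(self, idx):         # idx is a tuple of integers
        pos = self.offset + sum(i * s for i, s in zip(idx, self.strides))
        return self.buf.data[pos]

buf = DataBuffer(range(12))
a = ArrayView(buf, shape=(3, 4), strides=(4, 1))          # 3x4 view
col = ArrayView(buf, shape=(3,), strides=(4,), offset=1)  # second column
print(a[(1, 2)], col[(2,)])   # 6 9
```

Both views read through the same buffer, so more than one ArrayView can use the same data, as the paragraph above suggests.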
PB> a. _ufunc:
PB> 1. Does slicing syntax default to copy or view behavior?
Numeric 1 uses slicing for view, and a method for copy. "Feeling" compatible with core python would require copy on rhs, and view on lhs of an assignment. Is that distinction possible?
If copy is the default for slicing, how would one make a view?
PB> 2. Does item syntax default to copy or view behavior?
view.
PB> Yet, c[i] can be considered just a shorthand for c[i,:] which
PB> would imply copy behavior assuming slicing syntax returns a copy.
If you reason that way, then c is just a shorthand for c[...] too.
PB> 3. How is scalar coercion implemented?
PB> Python has fewer numeric types than Numeric which can cause
PB> coercion problems. For example when multiplying a Python scalar
PB> of type float and a Numeric array of type float, the Numeric array
PB> is converted to a double, since the Python float type is actually
PB> a double. This is often not the desired behavior, since the
PB> Numeric array will be doubled in size which is likely to be
PB> annoying, particularly for very large arrays.
Sure. That is handled reasonably well by the current Numeric 1.
To extend this, I'd like to comment that I have never really understood the philosophy of taking the largest type for coercion in all languages. Being a scientist, I have learned that when you multiply a very accurate number with a very approximate number, your result is going to be very approximate, not very accurate! It would thus be more logical to have Float32*Float64 return a Float32!
PB> In a future version of Python, the behavior of integer division
PB> will change. The operands will be converted to floats, so the
PB> result will be a float. If we implement the proposed scalar
PB> coercion rules where arrays have precedence over Python scalars,
PB> then dividing an array by an integer will return an integer array
PB> and will not be consistent with a future version of Python which
PB> would return an array of type double. Scientific programmers are
PB> familiar with the distinction between integer and floating-point
PB> division, so should Numeric 2 continue with this behavior?
Numeric 2 should be as compatible as reasonably possible with core python. But my question is: how would we do integer division of arrays? A ufunc for which no operator shortcut exists?
PB> 7. How are numerical errors handled (IEEE floating-point errors in
PB> particular)?
I am developing my code on Linux and IRIX. I have seen that where Numeric code on Linux runs fine, the same code on IRIX may "core dump" on a FPE (e.g. arctan2(0,0)). That difference should be avoided.
PB> a. Print a message of the most severe error, leaving it to
PB> the user to locate the errors.
What is the most severe error?
PB> c. Minimall UFunc class:
Typo: Minimal?
Regards,
Rob Hooft

Rob W. W. Hooft writes:
Some random PEP talk.
"PB" == Paul Barrett Barrett@stsci.edu writes:
PB> Its relation to the other types is defined when the C-extension
PB> module for that type is imported. The corresponding Python code
PB> is:
>> Int32.astype[Real64] = Real64
I understand this is to be done by the Int32 C extension module. But how does it know about Real64?
This approach assumes that there are a basic set of predefined types. In the above example, the Real64 type is one of them. But let's consider creating a completely new type, say Real128. This type knows its relation to the other previously defined types, namely Real32, Real64, etc., but they do not know their relationship to it. That's still OK, because the Real128 type is imbued with this required information and is willing to share it with the other types.
By way of bootstrapping, only one predefined type need be known, say, Int32. The operations associated with this type can only be Int32 operations, because this is the only type it knows about. Yet, we can add another type, say Real64, which has not only Real64 operations, BUT also Int32 and Real64 mixed operations, since it knows about Int32. The Real64 type provides the necessary information to relate the Int32 and Real64 types. Let's now add a third type, then a fourth, etc., each knowing about its predecessor types but not its successors.
This approach is identical to the way core Python adds new classes or C-extension types, so this is nothing new. The current types do not know about the new type, but the new type knows about them. As long as one type knows the relationship between the two that is sufficient for the scheme to work.
PB> Attributes:
PB>   .name: e.g. "Int32", "Float64", etc.
PB>   .typecode: e.g. 'i', 'f', etc.
PB>     (for backward compatibility)
.typecode() is a method now.
Yes, I propose that it become a settable attribute.
PB> .size (in bytes): e.g. 4, 8, etc.
"element size?"
Yes.
add.register('add', (Int32, Int32, Int32), cfunc-add)
Typo: cfunc-add is an expression, not an identifier.
No, it is a Python object that encompasses and describes a C function that adds two Int32 arrays and returns an Int32 array. It is essentially a Python wrapper of a C-function UFunc. It has been suggested that you should also be able to register Python expressions using the same interface.
An implementation of a (Int32, Float32, Float32) add is possible and desirable as mentioned earlier in the document. Which C module is going to declare such a combination?
PB> asstring(): create string from array
Not "tostring" like now?
This is proposed so as to be a little more consistent with core Python, which uses 'from-' and 'as-' prefixes. But I don't have strong opinions either way.
PB> 4. ArrayView
PB> This class is similar to the Array class except that the reshape
PB> and flat methods will raise exceptions, since non-contiguous
PB> arrays cannot be reshaped or flattened using just pointer and
PB> step-size information.
This was completely unclear to me until here. I must say I find this a strange way of handling things. I haven't looked into implementation details, but wouldn't it feel more natural if an Array would just be the "data", and an ArrayView would contain the dimensions and strides. Completely separated. One would always need a pair, but more than one ArrayView could use the same Array.
In my definition, an Array that has no knowledge of its shape and type is not an Array, it's a data or character buffer. An array in my definition is a data buffer with information on how that buffer is to be mapped, i.e. shape, type, etc. An ArrayView is an Array that shares its data buffer with another Array, but may contain a different mapping of that Array, i.e. its shape and type are different.
If this is what you mean, then the answer is "Yes". This is how we intend to implement Arrays and ArrayViews.
PB> a. _ufunc:
PB> 1. Does slicing syntax default to copy or view behavior?
Numeric 1 uses slicing for view, and a method for copy. "Feeling" compatible with core python would require copy on rhs, and view on lhs of an assignment. Is that distinction possible?
If copy is the default for slicing, how would one make a view?
B = A.V[:10] or A.view[:10] are some possibilities. B is now an ArrayView class.
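A toy illustration of the `A.view[:10]` spelling, with all class and attribute names hypothetical:

```python
# Sketch only: Array slices copy by default, while the .view attribute
# returns a proxy whose slices are views onto the parent's data.

class ViewProxy:
    """Proxy returned by Array.view; slicing it yields a view."""
    def __init__(self, arr):
        self.arr = arr

    def __getitem__(self, sl):
        return ArrayView(self.arr, sl)

class ArrayView:
    def __init__(self, arr, sl):
        self.arr, self.sl = arr, sl

    def __getitem__(self, i):
        return self.arr.data[self.sl][i]   # reads through to the parent

class Array:
    def __init__(self, data):
        self.data = list(data)

    @property
    def view(self):
        return ViewProxy(self)

    def __getitem__(self, sl):             # default: copy semantics
        return Array(self.data[sl])

A = Array(range(100))
B = A.view[:10]          # B is a view ...
A.data[3] = -1
print(B[3])              # ... so it sees the change: -1
```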
PB> 2. Does item syntax default to copy or view behavior?
view.
PB> Yet, c[i] can be considered just a shorthand for c[i,:] which
PB> would imply copy behavior assuming slicing syntax returns a copy.
If you reason that way, then c is just a shorthand for c[...] too.
Yes, that is correct, but that is not how Python currently behaves. The motivation for these questions is consistency with core Python behavior. The current Numeric does not follow this pattern for reasons of performance. If we assume performance is NOT an issue (i.e. we can get similar performance by using various tricks), then what behavior is more intuitive for the average and novice user?
PB> 3. How is scalar coercion implemented?
PB> Python has fewer numeric types than Numeric which can cause
PB> coercion problems. For example when multiplying a Python scalar
PB> of type float and a Numeric array of type float, the Numeric array
PB> is converted to a double, since the Python float type is actually
PB> a double. This is often not the desired behavior, since the
PB> Numeric array will be doubled in size which is likely to be
PB> annoying, particularly for very large arrays.
Sure. That is handled reasonably well by the current Numeric 1.
To extend this, I'd like to comment that I have never really understood the philosophy of taking the largest type for coercion in all languages. Being a scientist, I have learned that when you multiply a very accurate number with a very approximate number, your result is going to be very approximate, not very accurate! It would thus be more logical to have Float32*Float64 return a Float32!
If numeric precision was all that mattered, then you would be correct. But numeric range is also important. I would hate to take the chance of overflowing the above multiplication because I stored the result as a Float32, instead of a Float64, even though the Float64 is overkill in terms of precision. FORTRAN has made an attempt to address this issue in FORTRAN 9X by allowing the user to indicate the range and precision of the calculation.
PB> In a future version of Python, the behavior of integer division
PB> will change. The operands will be converted to floats, so the
PB> result will be a float. If we implement the proposed scalar
PB> coercion rules where arrays have precedence over Python scalars,
PB> then dividing an array by an integer will return an integer array
PB> and will not be consistent with a future version of Python which
PB> would return an array of type double. Scientific programmers are
PB> familiar with the distinction between integer and floating-point
PB> division, so should Numeric 2 continue with this behavior?
Numeric 2 should be as compatible as reasonably possible with core python. But my question is: how would we do integer division of arrays? A ufunc for which no operator shortcut exists?
I don't understand either question.
We have devised a scheme where there are two sets of coercion rules. One for coercion between array types, and one for array and Python scalar types. This latter set of rules can either have higher precedence for array types or Python scalar types. We favor array types having precedence.
A more complex set of coercion rules is also possible, if you prefer.
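The two-tier scheme might look like this in outline; the tables and type names are illustrative only:

```python
# Sketch of two sets of coercion rules: one table for array-with-array
# mixing, and a separate rule giving array types precedence over
# Python scalars.

ARRAY_COERCION = {
    ("Int32", "Float64"): "Float64",
    ("Float32", "Float64"): "Float64",
}

def result_type(a, b):
    """Array-with-array coercion, symmetric lookup in the table."""
    if a == b:
        return a
    return ARRAY_COERCION.get((a, b)) or ARRAY_COERCION[(b, a)]

def scalar_result_type(array_type, scalar):
    # Array precedence: the Python scalar adapts to the array's type,
    # so a Float32 array times a Python float stays Float32.
    return array_type

print(result_type("Int32", "Float64"))          # Float64
print(scalar_result_type("Float32", 2.0))       # Float32
```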
PB> 7. How are numerical errors handled (IEEE floating-point errors in
PB> particular)?
I am developing my code on Linux and IRIX. I have seen that where Numeric code on Linux runs fine, the same code on IRIX may "core dump" on a FPE (e.g. arctan2(0,0)). That difference should be avoided.
PB> a. Print a message of the most severe error, leaving it to
PB> the user to locate the errors.
What is the most severe error?
Well, divide by zero and overflow come to mind. Underflows are often considered less severe. Yet this is up to you to decide.
PB> c. Minimall UFunc class:
Typo: Minimal?
Got it!
Thanks for your comments.
I-obviously-have-a-lot-of-explaining-to-do-ly yours, Paul

"PB" == Paul Barrett Barrett@stsci.edu writes:
PB> By way of bootstrapping, only one predefined type need be known,
PB> say, Int32. The operations associated with this type can only be
PB> Int32 operations, because this is the only type it knows about.
PB> Yet, we can add another type, say Real64, which has not only
PB> Real64 operations, BUT also Int32 and Real64 mixed operations,
PB> since it knows about Int32. The Real64 type provides the
PB> necessary information to relate the Int32 and Int64 types. Let's
PB> now add a third type, then a fourth, etc., each knowing about its
PB> predecessor types but not its successors.
PB> This approach is identical to the way core Python adds new
PB> classes or C-extension types, so this is nothing new. The
PB> current types do not know about the new type, but the new type
PB> knows about them. As long as one type knows the relationship
PB> between the two that is sufficient for the scheme to work.
Yuck. I'm thinking how long it would take to load the Int256 class, because it will need to import all other types before defining the relations.... [see below for another idea]
PB> Attributes: .name: e.g. "Int32", "Float64", etc. .typecode:
PB> e.g. 'i', 'f', etc. (for backward compatibility)
.typecode() is a method now.
PB> Yes, I propose that it become a settable attribute.
Then it is not backwards compatible anyway, and you could leave it out.
PB> .size (in bytes): e.g. 4, 8, etc.
"element size?"
PB> Yes.
I think it should be called like that in that case. I dnt lk abbrvs. size could be misread as the size of the total object.
add.register('add', (Int32, Int32, Int32), cfunc-add)
Typo: cfunc-add is an expression, not an identifier.
PB> No, it is a Python object that encompasses and describes a C
PB> function that adds two Int32 arrays and returns an Int32 array.
I understand that, but in general a "-" in pseudo-code is the minus operator. I'd write cfunc_add instead.
An implementation of a (Int32, Float32, Float32) add is possible and desirable as mentioned earlier in the document. Which C module is going to declare such a combination?
Now that I re-think this: would it be possible for the type-loader to check for each type that it loads whether a cross-type module is available with a previously loaded type? That way all types can be independent. There would be a Int32 module knowing only Int32 types, and Float32 only knowing Float32 types. Then there would be a Int32Float32 type that handles cross-type functions. When Int32 or Float32 is loaded, the loader can see whether the other has been loaded earlier, and if it is, load the cross-definitions as well.
Only problem I can think of is functions linking 3 or more types.
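Rob's loader idea can be simulated without real extension modules; the module names and the `AVAILABLE` registry below are stand-ins:

```python
# Sketch of the cross-type loader: after loading a type module, check
# whether a cross-type module exists for each previously loaded type.
# Everything here (module names, the registry) is invented.

AVAILABLE = {"Int32", "Float32", "Int32Float32"}   # stand-in for modules on disk
loaded = []
cross_loaded = []

def load_type(name):
    assert name in AVAILABLE
    for other in loaded:
        # try both spellings of the cross-type module name
        for cross in (other + name, name + other):
            if cross in AVAILABLE:
                cross_loaded.append(cross)   # load the cross-definitions
                break
    loaded.append(name)

load_type("Int32")
load_type("Float32")
print(cross_loaded)        # ['Int32Float32']
```

As Rob notes, signatures linking three or more types would need a further convention beyond pairwise modules.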
PB> asstring(): create string from array
Not "tostring" like now?
PB> This is proposed so as to be a little more consistent with Core
PB> Python which uses 'from-' and 'as-' prefixes. But I don't have
PB> strong opinions either way.
PIL uses tostring as well. Anyway, I understand the buffer interface is a nicer way to communicate.
PB> 4. ArrayView
PB> This class is similar to the Array class except that the reshape
PB> and flat methods will raise exceptions, since non-contiguous
PB> arrays cannot be reshaped or flattened using just pointer and
PB> step-size information.
This was completely unclear to me until here. I must say I find this a strange way of handling things. I haven't looked into implementation details, but wouldn't it feel more natural if an Array would just be the "data", and an ArrayView would contain the dimensions and strides. Completely separated. One would always need a pair, but more than one ArrayView could use the same Array.
PB> In my definition, an Array that has no knowledge of its shape and
PB> type is not an Array, it's a data or character buffer. An array
PB> in my definition is a data buffer with information on how that
PB> buffer is to be mapped, i.e. shape, type, etc. An ArrayView is
PB> an Array that shares its data buffer with another Array, but may
PB> contain a different mapping of that Array, i.e. its shape and type
PB> are different.
PB> If this is what you mean, then the answer is "Yes". This is how
PB> we intend to implement Arrays and ArrayViews.
No, it is not what I meant. Reading your answer I'd say that I wouldn't see the need for an Array. We only need a data buffer and an ArrayView. If there are two parts of the functionality, it is much cleaner to make the cut in an orthogonal way.
PB> B = A.V[:10] or A.view[:10] are some possibilities. B is now an
PB> ArrayView class.
I hate magic attributes like this. I do not like abbrevs at all. It is not at all obvious what A.T or A.V mean.
PB> 2. Does item syntax default to copy or view behavior?
view.
PB> Yet, c[i] can be considered just a shorthand for c[i,:] which
PB> would imply copy behavior assuming slicing syntax returns a copy.
If you reason that way, then c is just a shorthand for c[...] too.
PB> Yes, that is correct, but that is not how Python currently
PB> behaves.
Current python also doesn't treat c[i] as a shorthand for c[i,:] or c[i,...]
Regards,
Rob Hooft

Rob W. W. Hooft writes:
"PB" == Paul Barrett Barrett@stsci.edu writes:
PB> By way of bootstrapping, only one predefined type need be known,
PB> say, Int32. The operations associated with this type can only be
PB> Int32 operations, because this is the only type it knows about.
PB> Yet, we can add another type, say Real64, which has not only
PB> Real64 operations, BUT also Int32 and Real64 mixed operations,
PB> since it knows about Int32. The Real64 type provides the
PB> necessary information to relate the Int32 and Int64 types. Let's
PB> now add a third type, then a fourth, etc., each knowing about its
PB> predecessor types but not its successors.
PB> This approach is identical to the way core Python adds new
PB> classes or C-extension types, so this is nothing new. The
PB> current types do not know about the new type, but the new type
PB> knows about them. As long as one type knows the relationship
PB> between the two that is sufficient for the scheme to work.
Yuck. I'm thinking how long it would take to load the Int256 class, because it will need to import all other types before defining the relations.... [see below for another idea]
First, I'm not proposing that we use this method of bootstrapping from just one type. I was just demonstrating that it could be done. Users could then create their own types and dynamically add them to the module by the above scheme.
Second, I think you're making the situation more complex than it really is. It doesn't take that long to initialize the type rules and register the functions, because both tables are sparsely populated. If there isn't a rule between two types, you don't have to create a dictionary entry. The size of the coercion table is equal to or less than the number of types, so that's small. The function table is a sparsely populated square array. We just envision populating its diagonal elements and using coercion rules for the empty off-diagonal elements. The point is that if an off-diagonal element is filled, then it will be used.
I'll include our proposed implementation in the PEP for clarification.
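Pending that, here is one possible reading of the sparse-table dispatch, with invented names:

```python
# Sketch of dispatch over a sparsely populated function table: only the
# "diagonal" (same-type) entries are filled by default, and mixed-type
# calls fall back to the coercion table -- unless an off-diagonal entry
# was explicitly registered, in which case it wins.

COERCE = {("Int32", "Float64"): "Float64", ("Float64", "Int32"): "Float64"}
FUNCS = {}                                  # (in1, in2) -> callable

def register(sig, func):
    FUNCS[sig] = func

def dispatch(t1, t2):
    if (t1, t2) in FUNCS:                   # filled entry wins
        return (t1, t2), FUNCS[(t1, t2)]
    t = COERCE[(t1, t2)]                    # otherwise coerce both inputs
    return (t, t), FUNCS[(t, t)]

register(("Int32", "Int32"), lambda x, y: x + y)
register(("Float64", "Float64"), lambda x, y: x + y)

sig, f = dispatch("Int32", "Float64")
print(sig)        # ('Float64', 'Float64') -- resolved via the coercion table
```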
PB> Attributes: .name: e.g. "Int32", "Float64", etc. .typecode:
PB> e.g. 'i', 'f', etc. (for backward compatibility)
.typecode() is a method now.
PB> Yes, I propose that it become a settable attribute.
Then it is not backwards compatible anyway, and you could leave it out.
I'd like to, but others have strongly objected to leaving out typecodes.
PB> .size (in bytes): e.g. 4, 8, etc.
"element size?"
PB> Yes.
I think it should be called like that in that case. I dnt lk abbrvs. size could be misread as the size of the total object.
How about item_size?
add.register('add', (Int32, Int32, Int32), cfunc-add)
Typo: cfunc-add is an expression, not an identifier.
PB> No, it is a Python object that encompasses and describes a C
PB> function that adds two Int32 arrays and returns an Int32 array.
I understand that, but in general a "-" in pseudo-code is the minus operator. I'd write cfunc_add instead.
Yes. I understand now.
PB> 4. ArrayView
PB> This class is similar to the Array class except that the reshape
PB> and flat methods will raise exceptions, since non-contiguous
PB> arrays cannot be reshaped or flattened using just pointer and
PB> step-size information.
This was completely unclear to me until here. I must say I find this a strange way of handling things. I haven't looked into implementation details, but wouldn't it feel more natural if an Array would just be the "data", and an ArrayView would contain the dimensions and strides. Completely separated. One would always need a pair, but more than one ArrayView could use the same Array.
PB> In my definition, an Array that has no knowledge of its shape and
PB> type is not an Array, it's a data or character buffer. An array
PB> in my definition is a data buffer with information on how that
PB> buffer is to be mapped, i.e. shape, type, etc. An ArrayView is
PB> an Array that shares its data buffer with another Array, but may
PB> contain a different mapping of that Array, i.e. its shape and type
PB> are different.
PB> If this is what you mean, then the answer is "Yes". This is how
PB> we intend to implement Arrays and ArrayViews.
No, it is not what I meant. Reading your answer I'd say that I wouldn't see the need for an Array. We only need a data buffer and an ArrayView. If there are two parts of the functionality, it is much cleaner to make the cut in an orthogonal way.
I just don't see what you are getting at here! What attributes does your Array have, if it doesn't have a shape or type?
If Arrays only have view behavior, then yes, there is no need for the ArrayView class. Whereas if Arrays have copy behavior, it might be a good idea to distinguish between an ordinary Array and an ArrayView. An alternative would be to have a view attribute.
PB> B = A.V[:10] or A.view[:10] are some possibilities. B is now an
PB> ArrayView class.
I hate magic attributes like this. I do not like abbrevs at all. It is not at all obvious what A.T or A.V mean.
I'm not a fan of them either, but I'm looking for consensus on these issues.
PB> 2. Does item syntax default to copy or view behavior?
view.
PB> Yet, c[i] can be considered just a shorthand for c[i,:] which
PB> would imply copy behavior assuming slicing syntax returns a copy.
If you reason that way, then c is just a shorthand for c[...] too.
PB> Yes, that is correct, but that is not how Python currently
PB> behaves.
Current python also doesn't treat c[i] as a shorthand for c[i,:] or c[i,...]
Because there aren't any multi-dimensional lists in Python, only nested 1-dimensional lists. There is a structural difference.

"PB" == Paul Barrett Barrett@stsci.edu writes:
PB> .size (in bytes): e.g. 4, 8, etc.
"element size?"
PB> How about item_size?
OK.
No, it is not what I meant. Reading your answer I'd say that I wouldn't see the need for an Array. We only need a data buffer and an ArrayView. If there are two parts of the functionality, it is much cleaner to make the cut in an orthogonal way.
PB> I just don't see what you are getting at here! What attributes
PB> does your Array have, if it doesn't have a shape or type?
A piece of memory. It needs nothing more. A buffer[1]. You'd always need an ArrayView. The ArrayView contains information like dimensions, strides, data type, endianness.
Making a new _view_ would consist of making a new ArrayView, and pointing its data object to the same data array.
Making a new _copy_ would consist of making a new ArrayView, and marking the "copy-on-write" features (however that needs to be implemented, I have never done that. Does it involve weak references?).
Different Views on the same data can even have different data types: e.g. character and byte, or even floating point and integer (I am a happy user of the fortran EQUIVALENCE statement that way too).
The speed up by re-use of temporary arrays becomes very easy this way too: one can even re-use a floating point data array as integer result if the reference count of both the data array and its (only) view is one.
[1] Could the python buffer interface be used as a pre-existing implementation here? Would that make it possible to implement Array.append()? I don't always know beforehand how large my numeric arrays will become.
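A toy copy-on-write sketch along these lines (no weak references are needed for the basic scheme; a real implementation would track sharing in C):

```python
# Sketch: a "copy" is just a new view on the same buffer until someone
# writes, at which point the writer gets its own buffer.  All names
# are invented for illustration.

class Buffer:
    def __init__(self, values):
        self.data = list(values)

class CowView:
    def __init__(self, buf):
        self.buf = buf
        self.shared = False

    def copy(self):
        other = CowView(self.buf)      # share the buffer for now
        self.shared = other.shared = True
        return other

    def __setitem__(self, i, v):
        if self.shared:                # first write: materialize the copy
            self.buf = Buffer(self.buf.data)
            self.shared = False
        self.buf.data[i] = v

    def __getitem__(self, i):
        return self.buf.data[i]

a = CowView(Buffer([1, 2, 3]))
b = a.copy()
b[0] = 99
print(a[0], b[0])     # 1 99  -- the write detached b from a's buffer
```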
Rob

Rob W. W. Hooft writes:
No, it is not what I meant. Reading your answer I'd say that I wouldn't see the need for an Array. We only need a data buffer and an ArrayView. If there are two parts of the functionality, it is much cleaner to make the cut in an orthogonal way.
PB> I just don't see what you are getting at here! What attributes
PB> does your Array have, if it doesn't have a shape or type?
A piece of memory. It needs nothing more. A buffer[1]. You'd always need an ArrayView. The ArrayView contains information like dimensions, strides, data type, endianness.
Making a new _view_ would consist of making a new ArrayView, and pointing its data object to the same data array.
Making a new _copy_ would consist of making a new ArrayView, and marking the "copy-on-write" features (however that needs to be implemented, I have never done that. Does it involve weak references?).
Different Views on the same data can even have different data types: e.g. character and byte, or even floating point and integer (I am a happy user of the fortran EQUIVALENCE statement that way too).
I think our approaches are very similar. It's the meaning that we ascribe to Array and ArrayView that appears to be causing the confusion. Your Array object is our Data object and your ArrayView object is our Array attributes, i.e. the information to map/interpret the Data object. We view an Array as being composed of two entities, its attributes and a Data object. And we entirely agree with the above definitions of _view_ and _copy_. But you haven't told us what object associates your Array and ArrayView to make a usable array that can be sliced, diced, and Julian fried.
My impression of your slice method would be:
slice(Array, ArrayView, slice expression)
I'm not too keen on this approach. :-)
The speed up by re-use of temporary arrays becomes very easy this way too: one can even re-use a floating point data array as integer result if the reference count of both the data array and its (only) view is one.
Yes! This is our intended implementation. But instead of re-using your Array object, we will be re-using a (data-) buffer object, or a memory-mapped object, or whatever else in which the data is stored.
[1] Could the python buffer interface be used as a pre-existing implementation here? Would that make it possible to implement Array.append()? I don't always know beforehand how large my numeric arrays will become.
In a way, yes. I've considered creating an in-memory object that has similar properties to the memory-mapped object (e.g. it might have a read-only property), so that the two data objects can be used interchangeably. The in-memory object would replace the string object as a data store, since the string object is meant to be read-only.
-- Paul

"PB" == Paul Barrett Barrett@stsci.edu writes:
PB> we entirely agree with the above definitions of _view_ and
PB> _copy_. But you haven't told us what object associates your
PB> Array and ArrayView to make a usable array that can be sliced,
PB> diced, and Julian fried.
Hm. You know, I am not so deep into the Python internals. I am a fairly high-level programmer. Not a CS type, but a chemist.... There might be much implementation detail that escapes me. But I'm just trying to keep things beautiful (as in Erich Gamma et al.)
I thought an ArrayView would have a pointer to the data array: either like the .data attribute in the Numeric 1 API, or as a Python object pointer.
PB> My impression of your slice method would be:
PB> slice(Array, ArrayView, slice expression)
If ArrayView.HasA(Array), that would not be required.
Regards,
Rob Hooft.

Being a scientist, I have learned that when you multiply a very accurate number with a very approximate number, your result is going to be very approximate, not very accurate! It would thus be more logical to have Float32*Float64 return a Float32!
Accuracy is not the right concept, but storage capacity. A Float64 can store any value that can be stored in a Float32, but the inverse is not true. Accuracy is not a property of a number, but of a value and its representation in the computer. The float value "1." can be perfectly accurate, even in 32 bits, or it can be an approximation for 1.-1.e-50, which cannot be represented precisely.
BTW, Float64 also has a larger range of magnitudes than Float32, not just more significant digits.
Numeric 2 should be as compatible as reasonably possible with core python. But my question is: how would we do integer division of arrays? A ufunc for which no operator shortcut exists?
Sounds fine. On the other hand, if and when Python's integer division behaviour is changed, there will be some new syntax for integer division, which should then also work on arrays.
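The new syntax Konrad anticipates eventually arrived as the `//` floor-division operator (PEP 238, Python 2.2); an array class can support both divisions via the two numeric hooks, as in this illustrative sketch (the `Array` class here is invented, not the PEP's):

```python
# Minimal illustration: element-wise true division and floor division
# on a toy array class via __truediv__ and __floordiv__.

class Array:
    def __init__(self, data):
        self.data = list(data)

    def __truediv__(self, n):          # a / n -> float results
        return Array([x / n for x in self.data])

    def __floordiv__(self, n):         # a // n -> integer division
        return Array([x // n for x in self.data])

a = Array([7, 8, 9])
print((a / 2).data)      # [3.5, 4.0, 4.5]
print((a // 2).data)     # [3, 4, 4]
```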
Konrad.

Design and Implementation
Some parts of this look a bit imprecise and I don't claim to understand them. For example:
Its relation to the other types is defined when the C-extension module for that type is imported. The corresponding Python code is:
> Int32.astype[Real64] = Real64
This says that the Real64 array-type has higher priority than the Int32 array-type.
I'd choose a clearer name than "astype" for this, but that's a minor detail. More important is how this is supposed to work. Suppose that in Int32 you say that Real64 has higher priority, and in Real64 you say that Int32 has higher priority. Would this raise an exception, and if so, when?
Perhaps the coercion question should be treated in a separate PEP that also covers standard Python types and provides a mechanism that any type implementer can use. I could think of a number of cases where I have wished I could define coercions between my own and some other types properly.
3. Array: This class contains information about the array, such as shape, type, endian-ness of the data, etc.. Its operators, '+', '-',
What about the data itself?
4. ArrayView This class is similar to the Array class except that the reshape and flat methods will raise exceptions, since non-contiguous
There are no reshape and flat methods in this proposal...
1. Does slicing syntax default to copy or view behavior? The default behavior of Python is to return a copy of a sub-list or tuple when slicing syntax is used, whereas Numeric 1 returns a view into the array. The choice made for Numeric 1 is apparently for reasons of performance: the developers wish to avoid the
Yes, performance was the main reason. But there is another one: if slicing returns a view, you can make a copy based on it, but if slicing returns a copy, there's no way to make a view. So if you change this, you must provide some other way to generate a view, and please keep the syntax simple (there are many practical cases where a view is required).
In this case the performance penalty associated with copy behavior can be minimized by implementing copy-on-write. This scheme has
Indeed, that's what most APL implementations do.
data buffer is made. View behavior would then be implemented by an ArrayView class, whose behavior would be similar to Numeric 1 arrays,
So users would have to write something like
ArrayView(array, indices)
That looks a bit cumbersome, and any straightforward way to write the indices is illegal according to the current syntax rules.
2. Does item syntax default to copy or view behavior?
If compatibility with lists is a criterion at all, then I'd apply it consistently and use view semantics. Otherwise let's forget about lists and discuss 1. and 2. from a purely array-oriented point of view. And then I'd argue that view semantics is more frequent and should thus be the default for both slicing and item extraction.
3. How is scalar coercion implemented?
The old discussion again...
annoying, particularly for very large arrays. We prefer that the array type trumps the python type for the same type class, namely
That is a completely arbitrary rule from any but the "large array performance" point of view. And it's against the Principle of Least Surprise.
Now that we have the PEP procedure for proposing any change whatsoever, why not lobby for the addition of a float scalar type to Python, with its own syntax for constants? That looks like the best solution from everybody's point of view.
4. How is integer division handled? In a future version of Python, the behavior of integer division will change. The operands will be converted to floats, so the
Has that been decided already?
7. How are numerical errors handled (IEEE floating-point errors in particular)? It is not clear to the proposers (Paul Barrett and Travis Oliphant) what is the best or preferred way of handling errors. Most of the C functions that do the operation iterate over the inner-most (last) dimension of the array. This dimension could contain a thousand or more items having one or more errors of differing type, such as divide-by-zero, underflow, and overflow. Additionally, keeping track of these errors may come at the expense of performance. Therefore, we suggest several options:
I'd like to add another one:
e. Keep some statistics about the errors that occur during the operation, and if at the end the error count is > 0, raise an exception containing as much useful information as possible.
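Option (e) might be sketched like this; `UFuncError` and `safe_divide` are invented names:

```python
# Sketch: count errors during the inner loop and raise one informative
# exception afterwards, instead of printing or aborting mid-operation.

class UFuncError(ArithmeticError):
    def __init__(self, counts):
        self.counts = counts
        super().__init__("errors during ufunc: %r" % counts)

def safe_divide(xs, ys):
    out, counts = [], {"divide-by-zero": 0}
    for x, y in zip(xs, ys):
        try:
            out.append(x / y)
        except ZeroDivisionError:
            out.append(float("nan"))          # placeholder result
            counts["divide-by-zero"] += 1
    if sum(counts.values()):
        raise UFuncError(counts)              # statistics, not a print
    return out

try:
    safe_divide([1.0, 2.0, 3.0], [1.0, 0.0, 0.0])
except UFuncError as e:
    print(e.counts)      # {'divide-by-zero': 2}
```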
I would certainly not want any Python program to *print* anything unless I have explicitly told it to do so.
Konrad.

Is there any plan to support sparse matrices in NumPy?
Peter
participants (4)
- Daehyok Shin
- Konrad Hinsen
- Paul Barrett
- rob@hooft.net