
Perry Greenfield wrote:
I'd argue that most people would want to see the field array as the type that it is, rather than as another instance of a recarray. Most of the "fun" of doing this is to have something that can be manipulated just like the numeric array that it is (i.e., ufuncs work on it, etc.). I'd also argue that it is more Pythonic, in the sense that field indexing is akin to indexing a list or dictionary: when you index one of those, you get the type it contains, not another list or dictionary. If I were to select several fields, then yes, I would expect to get another record array, but not if I select only one. We can override this with another subclass, but I wonder if this isn't so common a use case that it should be automatic whenever the type of the field is a standard array type.
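
[For concreteness, here is a minimal sketch of the single-field selection being discussed, written against a modern NumPy; the field names and data are made up. Whether the selection comes back as an ndarray or a recarray is exactly the question in this thread (current NumPy returns a plain ndarray for a plain-typed field):

    import numpy as np

    # A record array with a float field 'x' and an int field 'n'.
    rec = np.rec.array([(1.0, 2), (3.0, 4)],
                       dtype=[('x', 'f8'), ('n', 'i4')])

    x = rec['x']        # select a single field
    print(type(x))      # ndarray or recarray? -- the point under debate
    print(np.sqrt(x))   # ufuncs work either way, since recarray
                        # subclasses ndarray
]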
I can see this point. You do realize, however, that a recarray still has a type (and if that type is a fixed number type, then the ufuncs work). A recarray is a subclass of the ndarray, so it acts like an ndarray in almost every respect. The only differences are that attribute access can be used to get at fields and the __new__ method is a bit different.

So, the question is when field selection (which can be done on all arrays) should return the base type and when the sub-type. I hesitate to enforce returning the base type on all ndarrays because it seems limiting, but it could easily be done by creating a base-type ndarray in the getfield method whenever the descriptor has no fields. I'd like to see some real problems emerge before changing the default behavior by inserting special-case code. If we do want special-case code that returns ndarrays, chararrays, or more recarrays, I would put it in the recarray subclass itself.

-Travis
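
[A rough sketch of the subclass approach Travis suggests at the end: keep the special case in the subclass, returning a base ndarray for plain-typed fields and keeping the subclass for structured selections. The name FieldArray is hypothetical, and this targets the current NumPy API rather than the code as it existed at the time:

    import numpy as np

    class FieldArray(np.recarray):
        def __getitem__(self, indx):
            obj = super().__getitem__(indx)
            # Plain-typed field: drop back to the base ndarray.
            if isinstance(obj, np.ndarray) and obj.dtype.names is None:
                return obj.view(np.ndarray)
            # Structured selection (or a record scalar): leave as-is.
            return obj

    r = np.rec.array([(1.0, 2), (3.0, 4)],
                     dtype=[('x', 'f8'), ('n', 'i4')]).view(FieldArray)
    print(type(r['x']))          # <class 'numpy.ndarray'>
    print(type(r[['x', 'n']]))   # FieldArray: multi-field selection
                                 # keeps the subclass
]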