Hi all,
In my continuing adventures in the Land of Custom Dtypes I've come across some rather disappointing behaviour in 1.7 & 1.8.
I've defined my own class `Time360`, and a corresponding dtype `time360` which references Time360 as its scalar type.
Now with 1.6.2 I can do:
    >>> t = Time360(2013, 6, 29)
    >>> np.array([t]).dtype
    dtype('Time360')
And since all the instances supplied to the function were instances of the scalar type for my dtype, numpy automatically created an array using my dtype. Happy days!
But in 1.7 and 1.8 I get:
    >>> np.array([t]).dtype
    dtype('O')
So now I just get a plain old object array. Boo! Hiss!
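For anyone following along without the extension module, the fallback itself is easy to see in pure Python. This `Time360` is a hypothetical plain-Python stand-in, not the C-defined scalar type from the extension; since NumPy has no registered dtype for it, array construction falls back to object:

```python
import numpy as np

# Hypothetical stand-in for the C-defined scalar type. NumPy knows
# nothing about this class, so it cannot infer a custom dtype.
class Time360:
    def __init__(self, year, month, day):
        self.year, self.month, self.day = year, month, day

t = Time360(2013, 6, 29)
a = np.array([t])
print(a.dtype)  # -> object
```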
Is this expected? Desirable? An unwanted regression?
Richard
On Fri, Jun 21, 2013 at 7:09 AM, Richard Hattersley rhattersley@gmail.com wrote:
Hi all,
In my continuing adventures in the Land of Custom Dtypes I've come across some rather disappointing behaviour in 1.7 & 1.8.
I've defined my own class `Time360`, and a corresponding dtype `time360` which references Time360 as its scalar type.
Now with 1.6.2 I can do:
    >>> t = Time360(2013, 6, 29)
    >>> np.array([t]).dtype
    dtype('Time360')
And since all the instances supplied to the function were instances of the scalar type for my dtype, numpy automatically created an array using my dtype. Happy days!
But in 1.7 and 1.8 I get:
    >>> np.array([t]).dtype
    dtype('O')
So now I just get a plain old object array. Boo! Hiss!
Is this expected? Desirable? An unwanted regression?
Bit short on detail here ;) How did you create/register the dtype?
Chuck
On 21 June 2013 14:49, Charles R Harris charlesr.harris@gmail.com wrote:
Bit short on detail here ;) How did you create/register the dtype?
The dtype is created/registered during module initialisation with:

    dtype = PyObject_New(PyArray_Descr, &PyArrayDescr_Type);
    dtype->typeobj = &Time360Type;
    ...
    PyArray_RegisterDataType(dtype);

Where Time360Type is my new type definition:

    static PyTypeObject Time360Type = { ... };

which is initialised prior to the dtype creation.
If the detail matters then should I assume this is unexpected behaviour and maybe I can fix my code so it works?
Richard
On Fri, Jun 21, 2013 at 9:18 AM, Richard Hattersley rhattersley@gmail.com wrote:
On 21 June 2013 14:49, Charles R Harris charlesr.harris@gmail.com wrote:
Bit short on detail here ;) How did you create/register the dtype?
The dtype is created/registered during module initialisation with:

    dtype = PyObject_New(PyArray_Descr, &PyArrayDescr_Type);
    dtype->typeobj = &Time360Type;
    ...
    PyArray_RegisterDataType(dtype);

Where Time360Type is my new type definition:

    static PyTypeObject Time360Type = { ... };

which is initialised prior to the dtype creation.
If the detail matters then should I assume this is unexpected behaviour and maybe I can fix my code so it works?
Hmm, that part looks ok.
Chuck
On Fri, Jun 21, 2013 at 12:53 PM, Charles R Harris charlesr.harris@gmail.com wrote:
On Fri, Jun 21, 2013 at 9:18 AM, Richard Hattersley rhattersley@gmail.com wrote:
On 21 June 2013 14:49, Charles R Harris charlesr.harris@gmail.com wrote:
Bit short on detail here ;) How did you create/register the dtype?
The dtype is created/registered during module initialisation with:

    dtype = PyObject_New(PyArray_Descr, &PyArrayDescr_Type);
    dtype->typeobj = &Time360Type;
    ...
    PyArray_RegisterDataType(dtype);

Where Time360Type is my new type definition:

    static PyTypeObject Time360Type = { ... };

which is initialised prior to the dtype creation.
If the detail matters then should I assume this is unexpected behaviour and maybe I can fix my code so it works?
Hmm, that part looks ok.
You could check the numpy/core/src/umath/test_rational.c.src code to see if you are missing something.
Chuck
On 21 June 2013 19:57, Charles R Harris charlesr.harris@gmail.com wrote:
You could check the numpy/core/src/umath/test_rational.c.src code to see if you are missing something.
My code is based in large part on exactly those examples (I don't think I could have got this far using the documentation alone!), but I've rechecked and there's nothing obvious missing.
That said I think there may be something funny going on with error handling within getitem and friends so I'm still following up on that.
Richard
On 21 June 2013 19:57, Charles R Harris charlesr.harris@gmail.com wrote:
You could check the numpy/core/src/umath/test_rational.c.src code to see if you are missing something.
In v1.7+ the difference in behaviour between my code and the rational test case is because my scalar type doesn't subclass np.generic (aka. PyGenericArrType_Type).
In v1.6 this requirement doesn't exist ... mostly. In other words, it works as long as the supplied scalars are contained within a sequence. So:

    np.array([scalar]) => np.array([scalar], dtype=my_dtype)

But:

    np.array(scalar) => np.array(scalar, dtype=object)
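(As a point of comparison, and only an analogy since it is a built-in rather than a user-registered dtype: a scalar type that does subclass np.generic, such as np.datetime64, resolves to the proper dtype in both the sequence and the bare-scalar case.)

```python
import numpy as np

# np.datetime64 subclasses np.generic, so NumPy resolves the proper
# dtype whether the scalar is bare or wrapped in a sequence.
s = np.datetime64('2013-06-29')
print(isinstance(s, np.generic))   # True
print(np.array([s]).dtype)         # datetime64[D]
print(np.array(s).dtype)           # datetime64[D] (0-d array)
```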
For one of my scalar/dtype combos I can easily work around the 1.7+ issue by just adding the subclass relationship. But another of my dtypes wraps a third-party type, so I can't modify the subclass relationship. :(
So I guess I have three questions.
Firstly, is there some cunning workaround when defining a dtype for a third-party type?
Secondly, is the subclass-generic requirement in v1.7+ desirable and/or intended? Or just an accidental regression?
And thirdly, assuming it's desirable to remove the subclass-generic requirement, would it also make sense to make it work for scalars which are not within a sequence?
NB. If we decide there's some work which needs doing here, then I should be able to put time on it.
Thanks, Richard
On Fri, Jun 28, 2013 at 5:27 AM, Richard Hattersley rhattersley@gmail.com wrote:
On 21 June 2013 19:57, Charles R Harris charlesr.harris@gmail.com wrote:
You could check the numpy/core/src/umath/test_rational.c.src code to see if you are missing something.
In v1.7+ the difference in behaviour between my code and the rational test case is because my scalar type doesn't subclass np.generic (aka. PyGenericArrType_Type).
In v1.6 this requirement doesn't exist ... mostly. In other words, it works as long as the supplied scalars are contained within a sequence. So:

    np.array([scalar]) => np.array([scalar], dtype=my_dtype)

But:

    np.array(scalar) => np.array(scalar, dtype=object)
Thanks for tracking that down.
For one of my scalar/dtype combos I can easily work around the 1.7+ issue by just adding the subclass relationship. But another of my dtypes wraps a third-party type, so I can't modify the subclass relationship. :(
So I guess I have three questions.
Firstly, is there some cunning workaround when defining a dtype for a third-party type?
Secondly, is the subclass-generic requirement in v1.7+ desirable and/or intended? Or just an accidental regression?
I don't know ;) But we do try to keep backward compatibility, so unless there is a good reason it would be a regression. In any case, we should look for a way to let the previous version work.
And thirdly, assuming it's desirable to remove the subclass-generic requirement, would it also make sense to make it work for scalars which are not within a sequence?
NB. If we decide there's some work which needs doing here, then I should be able to put time on it.
Chuck
On Fri, Jun 28, 2013 at 5:27 AM, Richard Hattersley rhattersley@gmail.com wrote:
On 21 June 2013 19:57, Charles R Harris charlesr.harris@gmail.com wrote:
You could check the numpy/core/src/umath/test_rational.c.src code to see if you are missing something.
In v1.7+ the difference in behaviour between my code and the rational test case is because my scalar type doesn't subclass np.generic (aka. PyGenericArrType_Type).
In v1.6 this requirement doesn't exist ... mostly. In other words, it works as long as the supplied scalars are contained within a sequence. So:

    np.array([scalar]) => np.array([scalar], dtype=my_dtype)

But:

    np.array(scalar) => np.array(scalar, dtype=object)
So the scalar case (0-dimensional array) doesn't work right. Hmm, what happens when you index the first array? Does subclassing the generic type work in 1.6?
My impression is that subclassing the generic type should be required, but I don't see where it is documented :( Anyway, what is the problem with the third party code? Is there no chance that you can get hold of it to fix it?
For one of my scalar/dtype combos I can easily work around the 1.7+ issue by just adding the subclass relationship. But another of my dtypes wraps a third-party type, so I can't modify the subclass relationship. :(
So I guess I have three questions.
Firstly, is there some cunning workaround when defining a dtype for a third-party type?
Secondly, is the subclass-generic requirement in v1.7+ desirable and/or intended? Or just an accidental regression?
And thirdly, assuming it's desirable to remove the subclass-generic requirement, would it also make sense to make it work for scalars which are not within a sequence?
NB. If we decide there's some work which needs doing here, then I should be able to put time on it.
Chuck
On 28 June 2013 17:33, Charles R Harris charlesr.harris@gmail.com wrote:
On Fri, Jun 28, 2013 at 5:27 AM, Richard Hattersley rhattersley@gmail.com wrote:
So:

    np.array([scalar]) => np.array([scalar], dtype=my_dtype)

But:

    np.array(scalar) => np.array(scalar, dtype=object)
So the scalar case (0-dimensional array) doesn't work right. Hmm, what happens when you index the first array? Does subclassing the generic type work in 1.6?
Indexing into the first array works fine. So something like `a[0]` calls my_dtype->f->getitem, which creates a new scalar instance, and something like `a[:1]` creates a new view with the correct dtype.
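For reference, both behaviours described here can be seen with a built-in dtype (used purely as an analogy for the custom one: scalar access goes through the dtype's getitem machinery, while slicing produces a view that keeps the dtype):

```python
import numpy as np

a = np.array(['2013-06-29'], dtype='datetime64[D]')

# a[0] yields a new scalar instance of the dtype's scalar type.
print(type(a[0]))       # <class 'numpy.datetime64'>

# a[:1] is a view onto the original array with the same dtype.
print(a[:1].dtype)      # datetime64[D]
print(a[:1].base is a)  # True
```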
My impression is that subclassing the generic type should be required, but I don't see where it is documented :(
Can you elaborate on why the generic type should be required? Do you think it might cause problems elsewhere? (FYI I've also tested with a patched version of v1.6.2 that fixes the typo preventing the use of user-defined dtypes with ufuncs, and that functionality seems to work fine too.)
Anyway, what is the problem with the third party code? Is there no chance that you can get hold of it to fix it?
Unfortunately it's out of my control.
Regards, Richard
participants (2)
- Charles R Harris
- Richard Hattersley