
This was originally posted on SO (https://stackoverflow.com/questions/28853740/numpy-array-casting-ruled-not-s...) and it was suggested that it is probably a bug in numpy.take.

Python 2.7.8 |Anaconda 2.1.0 (32-bit)| (default, Jul 2 2014, 15:13:35) [MSC v.1500 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> import numpy
>>> numpy.__version__
'1.9.2'
>>> a = numpy.array([9, 7, 5, 4, 3, 1], dtype=numpy.uint32)
>>> b = numpy.array([1, 3], dtype=numpy.uint32)
>>> c = a.take(b)
Traceback (most recent call last):
  File "<pyshell#5>", line 1, in <module>
    c = a.take(b)
TypeError: Cannot cast array data from dtype('uint32') to dtype('int32') according to the rule 'safe'
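A workaround not discussed in the report itself (a sketch; it assumes the indices are otherwise valid) is to cast the index array to `numpy.intp`, the platform's native index type, before calling `take`:

```python
import numpy

a = numpy.array([9, 7, 5, 4, 3, 1], dtype=numpy.uint32)
b = numpy.array([1, 3], dtype=numpy.uint32)

# numpy.intp is the signed type NumPy uses for indexing (int32 on
# 32-bit builds, int64 on 64-bit builds), so this cast always
# satisfies take()'s 'safe' casting check.
c = a.take(b.astype(numpy.intp))
print(c)  # [7 4]
```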

On Sat, Mar 7, 2015 at 2:02 PM, Dinesh Vadhia <dineshbvadhia@hotmail.com> wrote:
<snip>
This actually looks correct for 32-bit Windows. NumPy indexes with a signed type big enough to hold a pointer to void, which in this case is int32, and uint32 cannot be safely cast to that type.

Chuck
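Chuck's explanation can be checked directly with `numpy.can_cast`, which reports what the 'safe' rule allows (a sketch; the dtype-level rules are the same on any platform, only the meaning of `numpy.intp` differs):

```python
import numpy

# The 'safe' rule rejects uint32 -> int32 because values >= 2**31
# would not survive the cast.
print(numpy.can_cast(numpy.uint32, numpy.int32, casting='safe'))  # False

# uint32 -> int64 is safe, which is why the same take() call succeeds
# on 64-bit builds, where the index type numpy.intp is int64.
print(numpy.can_cast(numpy.uint32, numpy.int64, casting='safe'))  # True
```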

On Sat, Mar 7, 2015 at 2:45 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
<snip>
I note that on SO Jaime made the suggestion that take use unsafe casting and throw an error on out-of-bounds indexes. That sounds reasonable, although for sufficiently large integer types an index could wrap around to a valid value. Maybe make it work only for npy_uintp.

Chuck
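The wrap-around concern can be illustrated (a sketch, not code from the thread): under unsafe casting, an out-of-bounds unsigned index can silently become a small, perfectly valid index instead of raising an error.

```python
import numpy

# An index far outside any reasonable array, stored as uint64.
big = numpy.array([2**32 + 2], dtype=numpy.uint64)

# astype() defaults to casting='unsafe': the value is truncated to the
# low 32 bits and wraps around to the valid-looking index 2.
wrapped = big.astype(numpy.int32)
print(wrapped[0])  # 2
```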

On Sat, Mar 7, 2015 at 1:52 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
<snip>
I note that on SO Jaime made the suggestion that take use unsafe casting and throw an error on out of bounds indexes. That sounds reasonable, although for sufficiently large integer types an index could wrap around to a good value. Maybe make it work only for npy_uintp.
Chuck
It is mostly about consistency, and having take match what indexing already does, which is to unsafely cast all integers:

In [11]: np.arange(10)[np.uint64(2**64-1)]
Out[11]: 9

I think no one has ever complained about that obviously wrong behavior, but people do get annoyed if they cannot use their perfectly valid uint64 array because we want to protect them from themselves. Sebastian has probably given this more thought than anyone else; it would be interesting to hear his thoughts on this.

Jaime

--
(\__/)
( O.o)
( > <) This is Bunny. Copy Bunny into your signature and help him in his plans for world domination.
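The inconsistency Jaime describes can be reproduced with in-range indices (a sketch; the out-of-range scalar example above is version-dependent, but the dtype asymmetry is the point):

```python
import numpy as np

a = np.arange(10)
idx = np.array([1, 3], dtype=np.uint64)

# Fancy indexing accepts any integer dtype without an explicit cast...
print(a[idx])  # [1 3]

# ...while take() under 'safe' casting rejected uint64 indices
# (and uint32 on 32-bit builds, as in the report above).
```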

On Sat, 2015-03-07 at 18:21 -0800, Jaime Fernández del Río wrote:
<snip>
I note that on SO Jaime made the suggestion that take use unsafe casting and throw an error on out of bounds indexes. That sounds reasonable, although for sufficiently large integer types an index could wrap around to a good value. Maybe make it work only for npy_uintp.
Chuck
It is mostly about consistency, and having take match what indexing already does, which is to unsafely cast all integers:
In [11]: np.arange(10)[np.uint64(2**64-1)]
Out[11]: 9
I think no one has ever complained about that obviously wrong behavior, but people do get annoyed if they cannot use their perfectly valid uint64 array because we want to protect them from themselves. Sebastian has probably given this more thought than anyone else, it would be interesting to hear his thoughts on this.
Not really, there was no change in behaviour for arrays here. Apparently though (which I did not realize), there was a change for numpy scalars/0-d arrays. Of course I think ideally "same_kind" casting would raise an error or at least warn on out-of-bounds integers, but we do not have a mechanism for that.

We could fix this; I think Jaime you had thought about that at some point? But it would require loop specializations for every integer type.

So, I am not sure what to prefer, but for the user, indexing with unsigned integers has to keep working without an explicit cast. Of course the fact that it is dangerous is bothering me a bit, even if a dangerous wrap-around seems unlikely in practice.

- Sebastian
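Sebastian's point that the casting machinery has no mechanism to check values, only dtypes, can be shown with an explicit unsafe cast (a sketch, not code from the thread):

```python
import numpy

# Casting rules inspect only the dtypes, never the values stored in
# the array, so an out-of-range value is accepted without any warning:
# this one does not fit in int32 and silently wraps to a negative.
src = numpy.array([2**31 + 5], dtype=numpy.uint32)
dst = src.astype(numpy.int32, casting='unsafe')
print(dst[0])  # -2147483643
```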
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

On 08.03.2015 11:49, Sebastian Berg wrote:
On Sa, 2015-03-07 at 18:21 -0800, Jaime Fernández del Río wrote: <snip>
Not really, there was no change in behaviour for arrays here. Apparently though (which I did not realize), there was a change for numpy scalars/0-d arrays. Of course I think ideally "same_kind" casting would raise an error or at least warn on out-of-bounds integers, but we do not have a mechanism for that.
We could fix this; I think Jaime you had thought about that at some point? But it would require loop specializations for every integer type.
So, I am not sure what to prefer, but for the user, indexing with unsigned integers has to keep working without an explicit cast. Of course the fact that it is dangerous is bothering me a bit, even if a dangerous wrap-around seems unlikely in practice.
I was working on supporting arbitrary integer types as indices without casting. This would have a few advantages: you can save memory without sacrificing indexing performance by using smaller integers, and you can skip the negative-index wraparound step for unsigned types. But it does add quite a bit of code bloat that is essentially a micro-optimization.

To make it really useful, one would also need to adapt other functions like where, arange, meshgrid, indices etc. to have an option to return the smallest integer type that is sufficient for the index array.
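The "smallest sufficient integer type" idea can be sketched today with `numpy.min_scalar_type` (`compact_indices` below is a hypothetical helper, not an existing NumPy function):

```python
import numpy

def compact_indices(idx):
    # Hypothetical helper in the spirit of the proposal: shrink an
    # index array to the smallest integer dtype holding all its values.
    if idx.size == 0:
        return idx.astype(numpy.uint8)
    return idx.astype(numpy.min_scalar_type(int(idx.max())))

a = numpy.arange(100000)
idx = numpy.where(a % 1000 == 0)[0]   # returned as intp by default
small = compact_indices(idx)

print(small.dtype)   # uint32 -- the largest index here is 99000
print(a[small][:3])  # [   0 1000 2000]
```

Note the memory trade-off Julian mentions: on a 64-bit build the compacted array uses half the bytes of the default intp result, at the cost of extra code paths in the indexing machinery.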
participants (5)
- Charles R Harris
- Dinesh Vadhia
- Jaime Fernández del Río
- Julian Taylor
- Sebastian Berg