2012/11/16 Olivier Delalleau <olivier.delalleau(a)gmail.com>
> 2012/11/16 Charles R Harris <charlesr.harris(a)gmail.com>
>
>>
>>
>> On Thu, Nov 15, 2012 at 11:37 PM, Charles R Harris <
>> charlesr.harris(a)gmail.com> wrote:
>>
>>>
>>>
>>> On Thu, Nov 15, 2012 at 8:24 PM, Gökhan Sever <gokhansever(a)gmail.com>wrote:
>>>
>>>> Hello,
>>>>
>>>> Could someone briefly explain why these two operations are casting
>>>> my float32 arrays to float64?
>>>>
>>>> I1 (np.arange(5, dtype='float32')).dtype
>>>> O1 dtype('float32')
>>>>
>>>> I2 (100000*np.arange(5, dtype='float32')).dtype
>>>> O2 dtype('float64')
>>>>
>>>
>>> This one depends on the size of the multiplier and first appears in
>>> 1.6.0. I suspect it is a side effect of making the type-conversion code
>>> sensitive to magnitude.
>>>
>>>
>>>>
>>>>
>>>>
>>>> I3 (np.arange(5, dtype='float32')[0]).dtype
>>>> O3 dtype('float32')
>>>>
>>>> I4 (1*np.arange(5, dtype='float32')[0]).dtype
>>>> O4 dtype('float64')
>>>>
>>>
>>> This one probably depends on the fact that the element is a scalar, but
>>> it doesn't look right. Scalars are promoted differently. It also holds in
>>> numpy 1.5.0, so it is of old provenance.
>>>
>>>
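[A small illustration, not from the original thread: the dtype-only promotion table can be queried directly with np.promote_types, which sidesteps the scalar value-based rules that differ between numpy releases. The exact behavior of Python-int scalars changed again with NEP 50 in numpy 2.0, so only the table lookups below are version-stable.]

```python
import numpy as np

# np.promote_types applies numpy's dtype-only promotion table and is
# independent of the value-based scalar rules discussed in this thread.
print(np.promote_types(np.float32, np.float32))  # float32
print(np.promote_types(np.float32, np.int64))    # float64: int64 values
# cannot all be represented losslessly in float32, so both promote to float64.

# Whether a bare Python int scalar triggers that promotion depends on the
# numpy version (value-based casting before NEP 50, weak scalars after),
# which is why the results reported in this thread vary across releases.
x = np.arange(5, dtype='float32')
print((1 * x[0]).dtype)  # float64 in the versions discussed here,
                         # float32 under NEP 50 (numpy >= 2.0)
```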
>> This one has always bothered me:
>>
>> In [3]: (-1*arange(5, dtype=uint64)).dtype
>> Out[3]: dtype('float64')
>>
>
> My interpretation here is that since the possible results of multiplying
> an int64 with a uint64 can be signed, and can go beyond the range of
> int64, numpy prefers to cast everything to float64, which can represent
> (even if only approximately) a larger range of signed values.
>
Actually, thinking about it a bit more, I suspect the logic is not related
to the result of the operation, but to the fact that numpy needs to cast
both arguments to a common dtype before doing the operation, and it has no
integer dtype available that can hold both int64 and uint64 values, so it
uses float64 instead.
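[Editor's sketch, not part of the original message: the common-dtype explanation can be checked directly. np.promote_types(int64, uint64) yields float64, and the same promotion shows up when the two operands carry explicit dtypes, so it does not depend on the version-specific scalar rules.]

```python
import numpy as np

# No numpy integer dtype holds every int64 AND every uint64 value, so the
# common-dtype lookup falls back to float64, which covers both ranges
# (approximately, since float64 has only 53 bits of mantissa).
common = np.promote_types(np.int64, np.uint64)
print(common)  # float64

# The same promotion with two explicitly typed operands:
r = np.int64(-1) * np.arange(5, dtype=np.uint64)
print(r.dtype)  # float64
```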
-=- Olivier