[Numpy-discussion] Odd numerical difference between Numpy 1.5.1 and Numpy > 1.5.1
robert.kern at gmail.com
Tue Apr 12 11:24:56 EDT 2011
On Mon, Apr 11, 2011 at 23:43, Mark Wiebe <mwwiebe at gmail.com> wrote:
> On Mon, Apr 11, 2011 at 8:48 PM, Travis Oliphant <oliphant at enthought.com> wrote:
>> It would be good to see a simple test case and understand why the boolean
>> multiplied by the scalar double is becoming a float16. In other words,
>> why does
>> return a float16 array
>> This does not sound right at all and it would be good to understand why
>> this occurs, now. How are you handling scalars multiplied by arrays in
> The reason it's float16 is that the first function in the multiply function
> list for which both types can be safely cast to the output type,
Except that float64 cannot be safely cast to float16.
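(A quick illustration of that claim, using NumPy's own casting query; `np.can_cast`
reports whether a cast is "safe", i.e. can never lose precision or overflow:)

```python
import numpy as np

# float64 -> float16 loses both precision and range, so it is not a
# safe cast; the reverse (widening) cast is always safe.
print(np.can_cast(np.float64, np.float16))  # False
print(np.can_cast(np.float16, np.float64))  # True
```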
> applying the min_scalar_type function to the scalars, is float16.
This is implemented incorrectly, then. It makes no sense for floats,
for which the limiting attribute is precision, not range. For floats,
the result of min_scalar_type should be the type of the object itself,
nothing else. E.g. min_scalar_type(x)==float64 if type(x) is float no
matter what value it has.
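(For reference, here is a sketch of the behavior being objected to. `min_scalar_type`
picks the smallest dtype whose *range* can hold the value; for inexact values it
ignores precision entirely, so an ordinary Python float like 3.1 is reported as
float16, and under the promotion rule described above a bool array times that
scalar then comes out float16:)

```python
import numpy as np

# min_scalar_type only checks range, not precision:
print(np.min_scalar_type(3.1))   # float16 -- 3.1 fits in float16's range
print(np.min_scalar_type(1e50))  # float64 -- overflows float32's range
print(np.min_scalar_type(-1))    # int8
```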
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco