[Numpy-discussion] Odd numerical difference between Numpy 1.5.1 and Numpy > 1.5.1
robert.kern at gmail.com
Tue Apr 12 15:17:39 EDT 2011
On Tue, Apr 12, 2011 at 11:49, Mark Wiebe <mwwiebe at gmail.com> wrote:
> On Tue, Apr 12, 2011 at 9:30 AM, Robert Kern <robert.kern at gmail.com> wrote:
>> You're missing the key part of the rule that numpy uses: for
>> array*scalar cases, when both array and scalar are the same kind (both
>> floating point or both integers), then the array dtype always wins.
>> Only when they are different kinds do you try to negotiate a common
>> safe type between the scalar and the array.
> I'm afraid I'm not seeing the point you're driving at; can you provide some
> examples which tease apart these issues? Here's the same example but with
> different kinds, and to me it seems to have the same character as the case
> with float32/float64:
> array([ inf+nanj,  inf+nanj], dtype=complex64)
> array([ 1.00000000e+60+0.j, 1.00000000e+60+0.j])
The point is that when you multiply an array by a scalar, and the
array dtype is the same kind as the scalar dtype, the output dtype is
the array dtype. That's what keeps a float32 array float32 when you
multiply it by a Python float (which is double precision, i.e.
float64). min_scalar_type should never be consulted in this case, so
you don't need to account for it in min_scalar_type's rules. This
cross-kind example is irrelevant to the point I'm trying to make.
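A minimal illustration of the same-kind rule described above (the exact
promotion machinery changed with NEP 50 in NumPy 2.0, but this particular
case behaves the same in both old and new NumPy):

```python
import numpy as np

a = np.ones(3, dtype=np.float32)

# A Python float is double precision (float64), but it is the same
# kind as the array (floating point), so the array dtype wins:
b = a * 3.5
print(b.dtype)  # float32 -- the scalar does not upcast the array
```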
For cross-kind operations, then you do need to find a common output
type that is safe for both array and scalar. However, please keep in
mind that for floating point types, keeping precision is more
important than range!
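For contrast, a quick sketch of the cross-kind case, where a common safe
type is negotiated between the array and the scalar (again, the result is
the same under both legacy and NEP 50 promotion):

```python
import numpy as np

a = np.arange(3, dtype=np.int32)

# Integer array * float scalar: different kinds, so a common
# safe floating-point type is negotiated -- float64 here.
b = a * 3.5
print(b.dtype)  # float64
```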
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco