[Numpy-discussion] Unexpected float96 precision loss

Pauli Virtanen pav at iki.fi
Wed Sep 1 17:15:22 EDT 2010


Wed, 01 Sep 2010 16:26:59 -0400, Michael Gilbert wrote:
> I've been using numpy's float96 class lately, and I've run into some
> strange precision errors.
[clip]
>   >>> x = numpy.array( [0.01] , numpy.float96 )
[clip]
> I would expect the float96 calculation to also produce 0.0 exactly as
> found in the float32 and float64 examples.  Why isn't this the case?

(i) It is not possible to write long double literals in Python.
    "float96(0.0001)" means in fact "float96(float64(0.0001))"

(ii) It is not possible to represent the numbers 10^-r, r >= 1, exactly
     in base-2 floating point.
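
For instance, a minimal sketch of (i) and (ii) in action (it assumes
Python 3 for Decimal(float), and a platform where numpy actually exposes
a 96-bit long double -- elsewhere the type may be called float128):

from decimal import Decimal
import numpy

x = 0.0001            # a Python float literal, i.e. IEEE 754 binary64
print(Decimal(x))     # exact decimal value of that binary64 number:
                      # 0.00010000000000000000479... -- not 1/10000

# float96(0.0001) only widens the already-rounded binary64 value:
print(numpy.float96(0.0001) == numpy.float96(numpy.float64(0.0001)))  # True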

So if you write "float96(0.0001)", the result is not the float96 number 
closest to 0.0001, but the 96-bit representation of the 64-bit number 
closest to 0.0001. Indeed,

>>> float96(0.0001), float96(1.0)/1000
(0.00010000000000000000479, 0.00099999999999999999996)
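
If you need the long double actually closest to 0.0001, do the
arithmetic in long double, starting from integers (which convert
exactly). A sketch, assuming your platform really provides float96 --
on other platforms it may be named float128 or be missing:

import numpy

# The division is performed and rounded in long-double precision,
# so the result is the long double nearest to 1/10000.
a = numpy.float96(1) / numpy.float96(10000)
print(repr(a))

# Parsing from a string also avoids the float64 detour, but whether it
# keeps full long-double precision depends on the numpy version/platform.
b = numpy.float96('0.0001')
print(repr(b))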

-- 
Pauli Virtanen



