[Numpy-discussion] Unexpected float96 precision loss

Michael Gilbert michael.s.gilbert at gmail.com
Wed Sep 1 16:26:59 EDT 2010


Hi,

I've been using numpy's float96 class lately, and I've run into some
strange precision errors.  See the example below:

  >>> import numpy
  >>> numpy.version.version
  '1.5.0'
  >>> import sys
  >>> sys.version
  '3.1.2 (release31-maint, Jul  8 2010, 01:16:48) \n[GCC 4.4.4]'
  >>> x = numpy.array([0.01], numpy.float32)
  >>> y = numpy.array([0.0001], numpy.float32)
  >>> x[0]*x[0] - y[0]
  0.0
  >>> x = numpy.array([0.01], numpy.float64)
  >>> y = numpy.array([0.0001], numpy.float64)
  >>> x[0]*x[0] - y[0]
  0.0
  >>> x = numpy.array([0.01], numpy.float96)
  >>> y = numpy.array([0.0001], numpy.float96)
  >>> x[0]*x[0] - y[0]
  -6.286572655403010329e-22

I would expect the float96 calculation to also produce exactly 0.0, as
the float32 and float64 examples do.  Why isn't this the case?
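For reference, here is a minimal standalone comparison I would run,
assuming float96 corresponds to numpy.longdouble (x86 80-bit extended
precision) on this platform; note that the literal 0.01 is parsed by
Python as a 64-bit double before NumPy widens it, which may be where
the residual comes from:

  # Minimal sketch, assuming float96 maps to numpy.longdouble here.
  # The literal 0.01 is parsed as a 64-bit double before NumPy widens
  # it, so the extended-precision square may expose rounding error
  # that happens to cancel when the whole calculation stays in
  # float32 or float64.
  import numpy

  for dtype in (numpy.float32, numpy.float64, numpy.longdouble):
      x = dtype(0.01)
      y = dtype(0.0001)
      print(dtype.__name__, x * x - y)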

Slightly off-topic: why was the float128 class dropped?

Thanks in advance for any thoughts/feedback,
Mike


