Bug in floating-point addition: is anyone else seeing this?

Mark Dickinson dickinsm at gmail.com
Thu May 22 11:23:43 EDT 2008


On May 22, 5:09 am, Ross Ridge <rri... at caffeine.csclub.uwaterloo.ca>
wrote:
> Henrique Dante de Almeida  <hda... at gmail.com> wrote:
>
> > Finally (and the answer is obvious). 387 breaks the standards and
> >doesn't use IEEE double precision when requested to do so.
>
> Actually, the 80387 and the '87 FPU in all other IA-32 processors
> do use IEEE 754 double-precision arithmetic when requested to do so.
> The problem is that GCC doesn't request that it do so.  It's a long
> standing problem with GCC that will probably never be fixed.  You can
> work around this problem the way the Microsoft C/C++ compiler does
> by requesting that the FPU always use double-precision arithmetic.

Even this isn't a perfect solution, though: for one thing, you can
only change the precision used for rounding, not the exponent range,
which remains the same as for extended precision.  That means you
still don't get strict IEEE 754 compliance when working with very
large or very small numbers.  In practice, I guess it's fairly
easy to avoid the extremes of the exponent range, so this seems like
a workable fix.

More seriously, it looks as though libm (and hence the Python
math module) might need the extended precision: on my machine
there's a line in /usr/include/fpu_control.h that says

#define _FPU_EXTENDED 0x300     /* libm requires double extended precision.  */

Mark


