Float precision and float equality

Carl Banks pavlovevidence at gmail.com
Thu Dec 10 17:23:06 EST 2009


On Dec 10, 10:46 am, dbd <d... at ieee.org> wrote:
> On Dec 7, 12:58 pm, Carl Banks <pavlovevide... at gmail.com> wrote:
>
> > On Dec 7, 10:53 am, dbd <d... at ieee.org> wrote:
> > > ...
>
> > You're talking about machine epsilon?  I think everyone else here is
> > talking about a number that is small relative to the expected smallest
> > scale of the calculation.
>
> > Carl Banks
>
> When you implement an algorithm supporting floats (per the OP's post),
> the expected scale of calculation is the full range of floating point
> numbers.  For floating point numbers, the intrinsic truncation error
> is proportional to the value represented, over the normalized range of
> the floating point representation.  At absolute values smaller than
> the normalized range, the truncation error has a fixed value.  These
> are not necessarily 'machine' characteristics but characteristics of
> the floating point format implemented.

I know, and it's irrelevant: as far as I can tell, no one here is
talking about magnitude-dependent truncation error either, nor about
any other tomfoolery with the floating point format's least
significant bits.
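
To make the distinction concrete, here is a minimal sketch (assumes
IEEE 754 doubles, which is what CPython floats are, and Python 3.9+
for math.ulp):

    import math
    import sys

    # Machine epsilon: the gap between 1.0 and the next float up.
    print(sys.float_info.epsilon)   # 2.220446049250313e-16

    # ULP spacing scales with magnitude over the normalized range...
    print(math.ulp(1.0))            # 2.220446049250313e-16
    print(math.ulp(1000.0))         # ~1.14e-13

    # ...and bottoms out at a fixed value in the subnormal range.
    print(math.ulp(0.0))            # 5e-324

    # An application-level tolerance is a different animal: it encodes
    # what the *calculation* treats as negligible, not what the float
    # format can resolve.
    TOLERANCE = 0.1   # problem-specific, not format-specific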


> A useful description of floating point issues can be found:
[snip]

I'm not reading it because I believe I grasp the situation just fine.
But you are welcome to convince me otherwise.  Here's how:

Say I have two numbers, a and b.  They are expected to be in the range
(-1000,1000).  As far as I'm concerned, if they differ by less than
0.1, they might as well be equal.  Therefore my test for "equality"
is:

    abs(a - b) < 0.1

Can you give me a case where this test fails?
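
For what it's worth, here is the headroom argument as a runnable
sketch (again assuming IEEE 754 doubles; the helper name
roughly_equal is mine):

    import math

    def roughly_equal(a, b, tol=0.1):
        return abs(a - b) < tol

    # The worst-case representation error near the top of the expected
    # range is about one ULP at 1000:
    print(math.ulp(1000.0))   # ~1.14e-13, nearly twelve orders of
                              # magnitude below the tolerance

    print(roughly_equal(999.9999999999, 1000.0))   # True
    print(roughly_equal(999.85, 1000.0))           # False, as intended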

If a and b are too far out of their expected range, all bets are off,
but feel free to consider arbitrary values of a and b for extra
credit.
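
Taking my own extra credit: the obvious troublemakers among arbitrary
values are the non-finite ones, plus magnitudes where the float
spacing dwarfs the tolerance.  A sketch (Python 3.9+ for
math.nextafter; same hypothetical roughly_equal helper):

    import math

    def roughly_equal(a, b, tol=0.1):
        return abs(a - b) < tol

    # NaN fails every comparison, so identical inputs report unequal:
    nan = float("nan")
    print(roughly_equal(nan, nan))   # False

    # inf - inf is nan, so two equal infinities also report unequal:
    inf = float("inf")
    print(roughly_equal(inf, inf))   # False

    # At huge magnitudes the spacing between adjacent floats exceeds
    # the tolerance, so even neighboring representable values differ:
    big = 1e17
    print(math.ulp(big))                                      # 16.0
    print(roughly_equal(big, math.nextafter(big, math.inf)))  # False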


Carl Banks


