[Tutor] Float precision untrustworthy~~~
jeffshannon at gmail.com
Wed Mar 30 20:05:07 CEST 2005
On Mon, 28 Mar 2005 22:13:10 -0500, Jacob S. <keridee at jayco.net> wrote:
> I've already deleted the recent thread--
> But sometimes I agree with him who said that you can't trust floats at all.
> The scientific method suggests that if an output is not what it should be,
> then the hypothesis is untrue.
> In this case, the hypothesis is that float division should always
> produce the same output as the decimal division taught in schools
> today. Since that is not always true, floats are not trustworthy~~~
> frankly, the mere fact that floats are difficult to check with equality has
> bitten me more than anything else I've met yet in Python.
The scientific method is also quite well aware of the limits of
precision. *EVERY* tool that you use to make measurements has
precision limits. In most of those cases, the imprecision due to
measuring tools will overwhelmingly swamp the tiny bit of imprecision
involved with floating-point arithmetic.
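To put rough numbers on that claim (this session is mine, not from the original post, and `sys.float_info` appeared in Python 2.6, after this thread):

```python
import sys

# Relative precision of a Python float (an IEEE 754 double):
# about 2.2e-16, i.e. roughly 16 significant decimal digits.
eps = sys.float_info.epsilon
print(eps)

# A lab instrument good to 0.1% has a relative error of 1e-3 --
# around thirteen orders of magnitude coarser than float round-off.
measurement_error = 1e-3
print(measurement_error / eps)
```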
It's also worth pointing out that most of the float imprecision isn't
anything inherent in the floats themselves -- it's due to converting
between binary and decimal representation. Just as a phrase that's
translated from English into Russian and then back to English again
can have its meaning shifted, translating between different numerical
bases can create error -- but it's usually *much* less error than the
natural language translations cause.
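A small sketch of this "translation" effect (using the modern `fractions` module to inspect what is actually stored; the example is my addition):

```python
from fractions import Fraction

# Decimal 0.1 has no finite binary expansion, so the nearest
# representable double is stored instead:
print(Fraction(0.1))
# close to, but not exactly, 1/10

# The tiny rounding shows up when results are converted back to decimal:
print(0.1 + 0.2 == 0.3)        # False
print(abs((0.1 + 0.2) - 0.3))  # a difference around 1e-17
```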
Really, all this talk about floating-point imprecision is *way*
overblown. It's important to be aware of it, yes, because in some
cases it can be relevant... but it's a *long* way from making floats
unusable or unreliable.
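The usual way around the equality problem is a tolerance-based comparison. A sketch (`math.isclose` is modern Python, added in 3.5; in 2005 you would write the absolute-difference test by hand, as shown):

```python
import math

a = 0.1 + 0.2
b = 0.3
print(a == b)              # False -- exact equality fails
print(math.isclose(a, b))  # True  -- within relative tolerance

# The hand-rolled equivalent, for older Pythons:
def close(x, y, rel_tol=1e-9):
    return abs(x - y) <= rel_tol * max(abs(x), abs(y))

print(close(a, b))         # True
```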
> >>> 64**(1/3.) == 4
> False
> >>> 64**-3 == 4
> False
> >>> a = 1/3.
> >>> a
> 0.33333333333333331
Note that you'll have the same problems if you use a Decimal number
type, because there's also an imprecision with Decimals. The problem
is that you're expecting a digital variable with a limited discrete
set of possible values to be equivalent to a rational number -- but no
binary or decimal floating-point number can exactly represent 1/3. A
Decimal approximation would have a 3 as the final digit rather than a
1, but there *would* be a final digit, and *that* is why this can't
ever be an exact representation of one-third.
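To see the point concretely, here is a session of mine (not from the original thread) comparing the three representations with the standard-library `decimal` and `fractions` modules:

```python
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 17  # 17 significant digits for Decimal arithmetic

print(1 / 3.0)                  # binary float approximation of 1/3
print(Decimal(1) / Decimal(3))  # decimal approximation: ends in 3, but it still ends
print(Fraction(1, 3))           # exact rational: 1/3
```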
> Why not just implement decimal or some equivalent and
> get rid of hidden, hard to debug headaches?
Well, a Decimal type *has* been implemented... but it's just trading
one set of headaches for another. What your code is apparently
expecting is a Rational type, which has been discussed ad infinitum
(and probably implemented several times, though not (yet) accepted
into the standard library); Rationals have the problem, though, that
any given operation may take an unpredictable amount of time to
execute. Would you consider it an improvement if, instead of
wondering why you're not getting an equality, you were wondering
whether your machine had frozen?
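Python has since grown exactly such a type, `fractions.Fraction` (added in 2.6, after this post). It is exact, but the cost described above is real: denominators balloon, so each operation gets slower. A toy iteration of my own invention shows the effect:

```python
from fractions import Fraction

x = Fraction(1, 3)
for _ in range(10):
    x = x * x + Fraction(1, 7)  # every step is exact...

# ...but the exact result drags along an enormous denominator,
# so each further operation costs more and more time.
print(len(str(x.denominator)))  # number of digits in the denominator
```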
There's always a trade-off. It's important to be aware of the
weaknesses of the tools that you use, but *every* tool has weaknesses,
and it doesn't make sense to discard a tool just because you've
learned what those weaknesses are.