# [Python-bugs-list] floating point ??? (PR#337)

tim_one@email.msn.com
Tue, 23 May 2000 23:27:38 -0400 (EDT)

```
[slupi@iol.it]
> Full_Name: Sergio Lupi
> Version: 1.52
> OS: Linux Suse 6.4
> Submission from: (NULL) (195.78.220.33)
>
>
> very simple:

Actually not -- it's very complicated, and while it may be surprising it
isn't a bug.

> try this:
>
> >>> 0.14-0.13 == 0.13-0.12
> 1
> so far so good, but
>
> >>> 0.13-0.12 == 0.12-0.11
> 0
>
> There are some combinations of numbers (they are many indeed)
> that mess things up.

And not just in Python:  you'll get very similar results whether you try
this in Python, Perl, C, C++, Java, Fortran, or anything else that uses
binary floating-point arithmetic.  None of those numbers (.11, .12, .13,
.14) is exactly representable in binary fp, much as 1/3 isn't exactly
representable as a decimal expansion either:  0.3333...  So, for example,
.14-.13 is NOT .01 -- it's a binary fraction that's simply close to .01.
Ditto for .12-.11:  it's some other binary fraction that's also close to
.01.  If you print your results to more precision, you'll see a better
approximation to the value the computer truly computes:

>>> print "%.17g" % (.12-.11)
0.009999999999999995
>>> print "%.17g" % (.13-.12)
0.010000000000000009
>>>

And, indeed, those clearly aren't equal.

> This has been tested on several PCs under Linux and NT

Yes, that's expected.  It doesn't really matter whose hardware, whose
operating system, or even whose programming language you're using:  this is
what binary floating-point hardware *does*.  If you need exact decimal
arithmetic, you can't use binary floating-point.  There are classes
available for Python that support (at least the illusion of) decimal
arithmetic instead, but they're very much slower.

```
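The underlying claim — that none of .11, .12, .13, .14 is exactly representable in binary floating point — can be checked directly in a modern Python (the `fractions` module postdates this mail): `Fraction(x)` recovers the exact rational value the double actually stores, while `Fraction(str(x))` is the decimal value that was typed. A sketch:

```python
from fractions import Fraction

# A fraction with denominator 100 reduces to one whose denominator still
# has a factor of 5, so it has no finite binary expansion: the stored
# double can never equal the decimal literal.
for x in (0.11, 0.12, 0.13, 0.14):
    print(x, Fraction(x) == Fraction(str(x)))  # False for every x
```

Printing `Fraction(0.12)` itself shows the enormous exact numerator and power-of-two denominator the hardware really works with.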
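The decimal-arithmetic classes the mail refers to were third-party at the time; the same idea later landed in the standard library as the `decimal` module (Python 2.4, well after this mail). A minimal sketch of the original comparisons done exactly — note the values must be built from strings, or the binary representation error sneaks back in:

```python
from decimal import Decimal

# Decimal("0.13") is exactly 13/100; Decimal(0.13) would inherit the
# nearest-double error.  Every difference below is exactly 0.01, so
# both comparisons now succeed.
print(Decimal("0.14") - Decimal("0.13") == Decimal("0.13") - Decimal("0.12"))  # True
print(Decimal("0.13") - Decimal("0.12") == Decimal("0.12") - Decimal("0.11"))  # True
```

As the mail predicts, this is much slower than hardware binary floating point, but the results match decimal intuition.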