[Tutor] How did this decimal error pop up?

Hugo Arts hugo.yoshi at gmail.com
Wed Apr 17 00:10:57 CEST 2013


On Tue, Apr 16, 2013 at 7:43 PM, Jim Mooney <cybervigilante at gmail.com> wrote:

> I was doing a simple training prog to figure change, and wanted to avoid
> computer inaccuracy by using only two-decimal input and not using division
> or mod where it would cause error. Yet, on a simple subtraction I got a
> decimal error instead of a two decimal result, as per below. What gives?
>
>
Your assumption that using only two-decimal input and no division/modulo
would avoid computer inaccuracy is wrong. In fact, for some values floating
point numbers don't even have one decimal place of accuracy (all examples
were done in Python 2.7.3):

>>> 0.1 + 0.2
0.30000000000000004

This is a limitation of the way floating point numbers are represented in
hardware (some Python versions may actually print 0.3 here, but the number
stored in memory is *not* 0.3; it can't be). Floating point accuracy isn't
as simple as "accurate to 5 decimal places, always." Some numbers simply
*cannot* be represented as a floating point value. A simple example is the
number 0.1: it has no exact floating point representation. Note that
Python appears to have no problem storing the value 0.1:

>>> a = 0.1
>>> a
0.1

But that is only because Python rounds numbers when displaying them. We can
show that the error is still there by adding it up enough times to make it
visible:

>>> i = 0
>>> for _ in range(1000):
...     i += 0.1
...
>>> i
99.9999999999986

That should, by all the laws of math, be 100. But it isn't, because the
computer can't store 0.1 in a float, so it stores the closest number it can
represent instead. The value stored in a above is not actually 0.1; it's
closer to:

0.1000000000000000055511151231257827021181583404541015625
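
You can check this yourself (assuming Python 2.7 or later, where
Decimal.from_float exists) by asking the decimal module for the exact
decimal expansion of the float:

>>> from decimal import Decimal
>>> Decimal.from_float(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')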

The precise value depends a bit on your hardware, but it can *never* be
exactly 0.1: the format floating point numbers are stored in simply cannot
represent that value. The reason is that the number is stored in binary,
and 0.1 has no finite binary expansion. A good analogy in base 10 is the
number 1/3: to write it down in decimal notation you'd need an infinite
amount of space, 0.3333333... and so on. In binary floating point, 0.1,
along with a ton of other numbers, is the same way: you'd need an infinite
number of digits to store it exactly.
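
You can see that "closest binary fraction" behaviour directly with the
fractions module (assuming Python 2.7 or later, where Fraction accepts a
float):

>>> from fractions import Fraction
>>> Fraction(0.1)
Fraction(3602879701896397, 36028797018963968)

The denominator there is 2**55, so what is actually stored is the nearest
fraction whose denominator is a power of two, not 0.1 itself.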

To store decimals with perfect accuracy, you have two options. The most
basic is what is generally called "fixed point" arithmetic: you decide
beforehand on the amount of accuracy you'd like, say 2 decimal places,
multiply every number by 10^decimals_of_accuracy (in this case, 100), and
store the result in a regular int. So 10.43 becomes 1043. When you display
a number, you just put the decimal point back in. This is pretty simple and
gives you perfect (though fixed) accuracy, but it's also kind of tedious.
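
A minimal sketch of that idea, assuming two decimal places and non-negative
amounts; to_cents and fmt_cents are just made-up helper names for the
illustration, not standard functions:

def to_cents(text):
    # Parse a string like '10.43' into an integer number of cents.
    dollars, _, cents = text.partition('.')
    return int(dollars or 0) * 100 + int((cents + '00')[:2])

def fmt_cents(cents):
    # Turn an integer number of cents back into a '10.43'-style string.
    return '%d.%02d' % divmod(cents, 100)

paid = to_cents('20.00')          # 2000
price = to_cents('4.58')          # 458
print(fmt_cents(paid - price))    # 15.42, exact -- only int math was used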

The other option, and probably the simplest to use, is Python's built-in
decimal module, which provides a special class that does exact decimal
arithmetic. Why don't we use it for all applications, you might ask? It is
much slower than floats, and float accuracy is usually good enough that
people don't care about the tiny errors. Only software that absolutely
needs exact decimals, like accounting software, uses decimal.
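
For example (constructing Decimals from strings, so no binary rounding
sneaks in on the way):

>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.2')
Decimal('0.3')
>>> Decimal('20.00') - Decimal('4.58')
Decimal('15.42')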

The documentation for the decimal module is here:
http://docs.python.org/2/library/decimal.html
"What Every Computer Scientist Should Know About Floating-Point Arithmetic" is here:
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

HTH,
Hugo