# Decimal arithmetic, was Re: Python GUI app to impress the boss?

Bengt Richter bokr at oz.net
Mon Sep 30 00:42:39 EDT 2002

On Sun, 29 Sep 2002 17:33:57 -0500, "Chris Gonnerman" <chris.gonnerman at newcenturycomputers.net> wrote:

>----- Original Message -----
>From: "Bengt Richter" <bokr at oz.net>
>
>
>> On 29 Sep 2002 17:06:53 GMT, Christopher Browne <cbbrowne at acm.org> wrote:
>>
>> >The problem with FP is that FP involves the use of binary
>> >approximations, which commonly cannot exactly represent
>> >decimal values.
>> >
>> >Notably, the decimal fraction 0.1, which is /exactly/ 1/10,
>> >does not have any exact encoding in binary.  Its
>> >representation in binary is a repeating binary fraction.
>>
>> How is that a significantly different problem from repeating
>> decimal fractions? (E.g., 1/3. or 1/7. or wxyz/9999. which
>will give you 0.wxyzwxyz...?)
>
>Humans think in decimal.  To a human, 1.0/10.0 == 0.1, not
>0.10000000000000001.  OK, so this *rounds* to 0.10 at two decimal places, but:
>
>>>> .70 * .05
>0.034999999999999996
>
>Rounded to two decimals, that's 0.03; but done in proper
>decimal numbers, the answer should be 0.035, rounded to 0.04
>using either "classic" or "banker's" decimal rounding.
>
>If you are calculating sales tax, for instance, this is
>unacceptable.
>
>(I borrowed this example from memory from a previous poster,
>and if I could remember who it was I'd give credit...)
>
>You ask how it is a different problem.  Simple.  A human,
I hope that includes me ;-)
>trained in decimal math in grade school, knows that
>
>    1/3 = 0.3333333...
>
>and would *expect* it to round to 0.33 (still in two decimal
>places).  The problem with *binary* floating point is the
>"surprise factor."
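
The surprise factor described above is easy to reproduce. A sketch using today's standard decimal module (which postdates this thread) shows the binary-float result next to a true decimal one:

```python
from decimal import Decimal, ROUND_HALF_UP

# Binary floats cannot represent .70 or .05 exactly, so the product
# lands just below the true value 0.035:
print(repr(.70 * .05))   # 0.034999999999999996 -- '%.2f' rounds it to '0.03'

# Exact decimal arithmetic keeps 0.035, which half-up rounding takes to 0.04:
exact = Decimal('.70') * Decimal('.05')
print(exact.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP))   # 0.04
```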
>
Old programmers aren't surprised by anything. They expect the unexpected ;-)

>So, Bengt, how do I do this right using only floats?  I can't.
>I have to have decimal arithmetic to get the right answer.
>
It goes back to requirements. E.g.,

```python
>>> for x in xrange(100):
...     for y in xrange(100):
...         fx = float('.%02d' %x)
...         fy = float('.%02d' %y)
...         fxy = fx*fy
...         xy = x*y
...         sfxy = '%.2f' % (fxy*1.000000001)
...         sxy = '0.%02d' % ((xy+50)/100)
...         assert sfxy == sxy
...
```

finishes sans problem, as a quick check on a limited problem domain. Obviously
a proper test suite and some math would be required to show validity for a
realistic problem domain.
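The same exhaustive check, rendered for Python 3 (`range` instead of `xrange`, and `//` for the integer division the original relied on), still passes — a sketch:

```python
# Check that float multiplication plus a tiny upward nudge reproduces
# exact half-up decimal rounding for all pairs of two-digit decimal fractions.
for x in range(100):
    for y in range(100):
        fx = float('.%02d' % x)
        fy = float('.%02d' % y)
        sfxy = '%.2f' % (fx * fy * 1.000000001)   # nudged float result
        sxy = '0.%02d' % ((x * y + 50) // 100)    # exact half-up rounding
        assert sfxy == sxy
print('all 10000 cases agree')
```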

If you can specify exactly how your decimal virtual machine works
(i.e., how many significant figures are carried, how it rounds/truncates
intermediate values, etc., and how many figures are going to be in the inputs
and outputs), then we can determine how much of a problem it will be to do it
using floating point.
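Today such a specification can be written down directly as a decimal context. A sketch with the standard decimal module (not yet available when this was posted), fixing 12 significant figures and half-up rounding:

```python
from decimal import Decimal, Context, ROUND_HALF_UP

# A concrete "decimal virtual machine": 12 significant figures,
# half-up rounding applied to every operation done through the context.
ctx = Context(prec=12, rounding=ROUND_HALF_UP)
product = ctx.multiply(Decimal('.05'), Decimal('.70'))
print(product)                                                    # 0.0350
print(product.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP))  # 0.04
```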

The example you cited is included in the above:

```python
>>> '%.2f' % ( .05 * .70)
'0.03'
>>> '%.2f' % (-.05 * .70)
'-0.03'
```

changes to

```python
>>> '%.2f' % (( .05 * .70) * 1.000000001)
'0.04'
>>> '%.2f' % ((-.05 * .70) * 1.000000001)
'-0.04'
```

There is not unlimited room in the double floating point space around the
true abstract real values corresponding to decimal representations, but there is
quite a bit, unless you are carrying a lot of figures. E.g., as an
approximate indication:

```python
>>> '%.2f' % ((.0349999999    ) * 1.0000000001)
'0.03'
>>> '%.2f' % ((.03499999999   ) * 1.0000000001)
'0.03'
>>> '%.2f' % ((.034999999999  ) * 1.0000000001)
'0.04'
```

and the multiplier is probably bigger than it has to be (and better specified
in binary). But that can be optimized when you specify your decimal virtual machine.
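
Specifying the nudge in binary might look like multiplying by 1 + 2**-30: exactly representable, still enormously larger than the roundoff, and far below the spacing of the decimal values being targeted. A sketch:

```python
# A binary-specified nudge: exactly representable, about 9.3e-10 relative.
NUDGE = 1 + 2 ** -30

print('%.2f' % (( .05 * .70) * NUDGE))   # 0.04
print('%.2f' % ((-.05 * .70) * NUDGE))   # -0.04
```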

If you need rounding that alternates direction on alternate exact trailing fives,
then you need an extra step.
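
That extra step is what the modern decimal module's ROUND_HALF_EVEN ("banker's") mode provides: an exact trailing five rounds toward the even neighbor. A sketch, again leaning on a module that postdates this thread:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round2(s):
    # Round an exact decimal string to two places, ties to even.
    return str(Decimal(s).quantize(Decimal('0.01'), rounding=ROUND_HALF_EVEN))

print(round2('0.035'))   # 0.04  (3 is odd, so the tie rounds up)
print(round2('0.045'))   # 0.04  (4 is even, so the tie rounds down)
```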

If you visualize the true theoretical decimal values representable in your decimal virtual
machine as little red glowing dots on the real axis, and visualize as little glowing blue dots
all the exact theoretical numbers corresponding to all valid bit combinations in a floating
point double, then if every pair of successive red dots has thousands of blue dots in between,
and the error in floating point calculations can be held to within a few blue dots, then
you can see how to select the red dot that represents the true decimal answer, even if
there is no blue dot exactly coinciding with it. Twelve decimal digits would give about a
thousand blue dots between the red ones, and a thousand times that while still in the FPU,
before storing in memory.

Rounding an exact decimal is moving from one red dot to another, perhaps skipping hundreds
or thousands of red dots, depending on the fractional decimals carried. But we must round by
moving from some blue dot to another blue dot. By shifting the blue uncertainty set reliably
to one side of its true red dot, we can move from blue dot to blue dot, where the new blue
dot will reliably be in an uncertainty set corresponding to the proper red dot.

If more blue dots are needed, they can be arranged. If you are doing very few operations,
so that the range of blue dot uncertainty that can be accumulated is small, then you can
have more red dots.
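
The blue-dot density is easy to measure today: math.ulp (Python 3.9+) gives the gap between adjacent doubles, so the number of doubles between neighboring red dots is just the red-dot spacing divided by the ulp. The density varies with the magnitude of the value — a sketch:

```python
import math

# Red dots: decimals carried to 12 fractional digits, spacing 1e-12.
RED_SPACING = 1e-12

for value in (0.035, 0.5, 0.99):
    blue = RED_SPACING / math.ulp(value)   # doubles per red-dot gap near value
    print('near %.3f: about %d blue dots per red dot' % (value, blue))
```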

Regards,
Bengt Richter
