Decimal arithmetic, with example code

Bengt Richter bokr at oz.net
Thu Oct 3 03:08:38 EDT 2002


On Tue, 1 Oct 2002 23:22:16 -0400, "Tim Peters" <tim_one at email.msn.com> wrote:

>[Bengt Richter]
>> This is unproved, but maybe with the help of timbot et al we can prove
>> the range over which it is safe and useful. The test run doesn't prove
>> anything, but I thought it interesting. Just run it as you would
>> your version.
>
>Note that you have no control over the accuracy of operations like 10.00**i,
>or of % formats, because Python has no control over them -- they're
>delegated to the platform libm pow() and libc sprintf(), and the quality of
>those varies massively across platforms.  If you want to be safe, you can't
>leave such things to the competency of your platform library writers.
>Floating divmod is also a bitch, limited by the quality of the platform
>fmod().
I understand. OTOH, if you had an installer that automatically tested safety
using the available infrastructure, the large majority of platforms might be
able to get optimized speed, where the fallback would be a guaranteed
platform-independent implementation... Which gets you to the question of what
safety means. Probably it should mean always reproducing the exact results
of the fixed point at a given precision, within some range of operator input
values... Which gets to what the range should be. Range and precision could be
specified to the installer... Which gets to whether a given range and precision
will be adequate for a given application... which gets to whether the user
understands what is required and can communicate it... which gets to motivating
a platform-independent-limited-only-by-memory-you-decide-how-to-use-it implementation...
which (among other things) makes me think you're smart ;-)

>
>I'm unclear on why so many are defending binary fp to people who insist they
>don't want it.  It's not like binary fp is in danger of going away in our
>lifetimes -- this is like "defending" Perl against Python <wink>.
>
Personally, I'm not defending fp, I'm just reacting to FUD and superstition.
Using 4 digits of fraction and calling it good has consequences too (don't
get me wrong, I know you know, but how many of the users will?) e.g.,

 >>> from fixedpoint import FixedPoint as F
 >>> def compound(yearlyRate, timesPerYear, years):
 ...     rate = 1 + yearlyRate/timesPerYear
 ...     acc = yearlyRate/yearlyRate # make one of proper type
 ...     for i in xrange(timesPerYear*years):
 ...         acc *= rate
 ...     return acc
 ...
 >>> rate4 = F('.06',4) # 6%/yr
 >>> rate50 = F('.06',50) # 6%/yr
 >>> ratef = .06
 >>> for r in [rate4,rate50,ratef]: print `r`
 ...
 FixedPoint('0.0600', 4)
 FixedPoint('0.06000000000000000000000000000000000000000000000000', 50)
 0.059999999999999998
 >>> for r in [rate4, rate50, ratef]:
 ...     print compound(r, 12, 30)
 ...
 6.0237
 6.02257521226321618405404680891614858781950546708965
 6.02257521226
 >>> for r in [rate4, rate50, ratef]:
 ...     print compound(r, 365, 30)
 ...
 8.7520
 6.04875261233655216027138099585778986587670630410143
 6.04875261234

If you want to borrow money at 6%/year, read the fine print ;-)

For monthly it's not so bad, but on a million bucks the difference might
pay for lunch by 2032 ;-)

 >>> print F((compound(rate4,12,30)-compound(rate50,12,30))*1000000)
 1124.79

...especially compounded daily ;-)

 >>> print F((compound(rate4,365,30)-compound(rate50,365,30))*1000000)
 2703247.39
 >>> print F((compound(ratef,365,30)-compound(rate50,365,30))*1000000)
 0.00
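For anyone replaying this today: the standard-library decimal module (a later addition to Python, not available when this was written) can reproduce the experiment by quantizing after every operation, which mimics FixedPoint's per-operation banker's rounding. compound_dec is my own sketch, not part of fixedpoint, and the prec=120 working precision is just a choice big enough that quantize() is the only rounding that matters:

```python
from decimal import Decimal, ROUND_HALF_EVEN, getcontext

getcontext().prec = 120  # enough working digits that quantize() is the only rounding

def compound_dec(yearly_rate, times_per_year, years, places):
    """Compound at a per-period rate quantized to `places`, re-quantizing each step."""
    q = Decimal(1).scaleb(-places)  # e.g. Decimal('0.0001') for places=4
    rate = Decimal(1) + (yearly_rate / times_per_year).quantize(q, ROUND_HALF_EVEN)
    acc = Decimal(1)
    for _ in range(times_per_year * years):
        acc = (acc * rate).quantize(q, ROUND_HALF_EVEN)
    return acc

# daily compounding of 6%/yr over 30 years: 4 digits vs 50 digits of fraction
coarse = compound_dec(Decimal('0.06'), 365, 30, places=4)
fine = compound_dec(Decimal('0.06'), 365, 30, places=50)
```

With 4 digits the per-day rate 0.06/365 quantizes all the way up to 0.0002, so the coarse run lands near the 8.75 shown above while the 50-digit run stays near 6.05.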

BTW, you probably noticed I wasn't really rounding in my evenround function --
I was just noodging the value so the subsequent round inside %.f looked like it
was doing decimal bankers round. I just played a hunch with the BIAS multiply.
Using FixedPoint as a final formatter for floating point should work better than
bias-kludging the ordinary round. It's probably about as good an estimate of the
nearest decimal of a given precision to a given fp number as you can get.

>>> for i in range(20): print i, F(.35,i)
...
0 0.
1 0.3
2 0.35
3 0.350
4 0.3500
5 0.35000
6 0.350000
7 0.3500000
8 0.35000000
9 0.350000000
10 0.3500000000
11 0.35000000000
12 0.350000000000
13 0.3500000000000
14 0.35000000000000
15 0.350000000000000
16 0.3500000000000000
17 0.34999999999999998
18 0.349999999999999978
19 0.3499999999999999778
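The same "nearest decimal of a given precision" job can be sketched with the decimal module (again, a later stdlib addition): Decimal(float) converts the binary value exactly, so a single quantize with half-even rounding gives the honest answer. nearest_decimal is my own name for the helper:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def nearest_decimal(x, places):
    """Nearest decimal with `places` fractional digits to the float x, ties to even."""
    # Decimal(x) is the exact binary value of x -- Decimal(0.35) begins
    # 0.34999999999999997779... -- so quantize() sees the true value.
    return Decimal(x).quantize(Decimal(1).scaleb(-places), rounding=ROUND_HALF_EVEN)
```

nearest_decimal(0.35, 4) gives 0.3500, while nearest_decimal(0.35, 17) exposes 0.34999999999999998, matching the table above.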

BTW, FWIW, since the denominator is 1L << -e where you round the conversion,
the following should hold (I think, not very tested):

                exactHalf = 1L << (-e-1)
                n = top + exactHalf    # plain round-half-up; a tie lands on q+1
                wasTie = (n & ((exactHalf<<2)-1L)) == (exactHalf<<1)  # tie with even quotient got bumped
                n = (n >> -e) - wasTie # readjust for bad tie result

                assert n == _roundquotient(top, 1L << -e)

Whether it's worth doing depends on speeds which I haven't tested. Depends on divmod and cmp...
Out of steam for tonight...
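An identity like that is easy to check by brute force against a divmod-based reference (the shape of fixedpoint's _roundquotient, as I recall it). In my sketch below (Python 3, nonnegative numerators only) the tie test keys on a tie that landed on an *even* quotient, since a plain add-half bump already sends odd-quotient ties to the even neighbor; treat the exact mask as my reading of the intended trick rather than anything from fixedpoint itself:

```python
def round_half_even_ref(top, k):
    """Reference: round top / 2**k to the nearest int, ties to even, via divmod."""
    q, r = divmod(top, 1 << k)
    twice = r << 1
    if twice > (1 << k) or (twice == (1 << k) and q & 1):
        q += 1
    return q

def round_half_even_shift(top, k):
    """Shift-based version: add-half-and-shift, then undo the one bad tie case."""
    exact_half = 1 << (k - 1)
    n = top + exact_half            # plain round-half-up
    # a tie on an even quotient q leaves n == (q+1) << k with q+1 odd,
    # i.e. n mod 2**(k+1) == 2**k -- that bump to odd must be undone:
    was_bad_tie = (n & ((exact_half << 2) - 1)) == (exact_half << 1)
    return (n >> k) - was_bad_tie

# the two agree over a brute-force range
assert all(round_half_even_ref(t, k) == round_half_even_shift(t, k)
           for k in range(1, 8) for t in range(1 << 12))
```

Whether it beats divmod on speed is a separate question, as noted above.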

BTW2, I wonder if it would be worthwhile to do a banker's-round-like round, but guaranteeing
that the result's magnitude will be the first fp value >= the true decimal value. It would
be minusculely biased, but it might be helpful for printing non-surprising decimals. str(f)
seems to solve part of the problem, but an alternate %f (%F?) to control banker's rounding
at a given precision might be nice. A few bits from fixedpoint could be cannibalized.

Regards,
Bengt Richter


