[Tutor] Python 2.5.4 - error in rounding

Steven D'Aprano steve at pearwood.info
Mon May 24 02:37:57 CEST 2010


On Mon, 24 May 2010 03:06:28 am Wayne Werner wrote:
> On Sat, May 22, 2010 at 9:58 AM, Steven D'Aprano <steve at pearwood.info> wrote:
> > On Sun, 23 May 2010 12:19:07 am Wayne Werner wrote:
> > > On Sat, May 22, 2010 at 7:32 AM, Steven D'Aprano <steve at pearwood.info> wrote:
> > > > Why do people keep recommending Decimal? Decimals suffer from
> > > > the exact same issues as floats,
> > >
> > > This is exactly incorrect! The Decimal operator offers /exact/
> > > decimal point operations.
> >
> > Decimal is only exact for fractions which can be represented by a
> > finite sum of powers-of-ten, like 0.1, just like floats can only
> > represent fractions exactly if they can be represented by a finite
> > sum of powers-of-two, like 0.5.
> >
> > Not only did I demonstrate an example of rounding error using
> > Decimal in my post, but you then repeated that rounding error and
> > then had the audacity to claim that it was "exact":
>
> Decimal doesn't round - exact precision, not exact accuracy.

Of course decimal rounds! Did you even bother to read the page on 
Decimal that you told me to read? It has a section called:

"Mitigating round-off error with increased precision"
http://docs.python.org/library/decimal.html#mitigating-round-off-error-with-increased-precision

Why would it have round-off error if it doesn't round?

Not only do Decimal calculations round, but the module gives the user a 
choice of rounding modes (e.g. round down, round up, banker's rounding), 
and lets you trap rounding and treat it as an error. I demonstrated an 
example of this rounding, a calculation of Decimal(1)/Decimal(3) which 
did NOT produce the correct result exactly, but ROUNDED the result to 
28 significant digits (the context's default precision).
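
For anyone who wants to see it in an interpreter, here is a quick 
sketch (using the module's default context of 28 significant digits; 
the switch to ROUND_UP is purely for illustration):

>>> from decimal import Decimal, getcontext, ROUND_UP
>>> Decimal(1) / Decimal(3)  # rounded to 28 significant digits
Decimal('0.3333333333333333333333333333')
>>> getcontext().rounding = ROUND_UP  # pick a different rounding mode
>>> Decimal(1) / Decimal(3)
Decimal('0.3333333333333333333333333334')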

Still don't believe me? Then explain this:

>>> x = Decimal('0.9999999999999999999999999999')
>>> x + Decimal(7)/Decimal(10**29)
Decimal('1.000000000000000000000000000')

The answer without rounding is:

0.99999999999999999999999999997

not one. And then there is this:

>>> decimal.getcontext().rounding = decimal.ROUND_DOWN
>>> x + Decimal(7)/Decimal(10**29) == x
True


IEEE-compliant floats also have a choice of rounding modes, but few 
high-level programming languages expose that functionality.



> Floating 
> point has neither reliable precision or accuracy, at least to certain
> extents. 

On IEEE-compliant systems (which include nearly any computer you're 
likely to work on) floating point has reliable precision: C singles are 
reliably 32 bits with 24 binary digits of precision, and C doubles 
(which Python uses) are reliably 64 bits with 53 binary digits of 
precision.
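
You don't have to take my word for it: on Python 2.6 or later the 
interpreter will report those figures itself (assuming the platform 
uses IEEE 754 doubles, which virtually all do):

>>> import sys
>>> sys.float_info.mant_dig  # binary digits of precision in a C double
53
>>> sys.float_info.dig  # decimal digits that can be faithfully represented
15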

As for accuracy, any lack of accuracy (correctness) is a bug in the 
implementation, not a fundamental flaw in float. E.g. the infamous 
Pentium FDIV bug.


> Decimal, OTOH will perform exactly the same under the exact 
> same circumstances every time. 

And so will floats.

Decimals and floats are both constructed exactly the same way:

number = significant digits * base ** exponent

The only difference is that Decimal uses digits 0...9 for the digits and 
ten for the base, while floats use 0,1 for the digits and two for the 
base. This makes Decimal very useful because we humans like to work 
with base-ten numbers like 0.1 and get upset that they can't be 
expressed exactly in base-two. But Decimal is subject to the exact same 
issues as float, because at a fundamental level they are constructed 
the same way with the same limitations. The difference in base merely 
affects *which* numbers can't be expressed exactly without rounding, 
not the existence of such numbers.
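
You can see that shared structure directly from the interpreter (a 
sketch; float.hex() and the DecimalTuple name need Python 2.6 or later):

>>> from decimal import Decimal
>>> Decimal('0.1').as_tuple()  # digits (1,), base ten, exponent -1
DecimalTuple(sign=0, digits=(1,), exponent=-1)
>>> (0.1).hex()  # binary significand times a power of two
'0x1.999999999999ap-4'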

Because we tend to *think* in base 10, we naturally get annoyed that 
while binary floats can express 0.099999999999999992 exactly, and 
0.10000000000000001 exactly, they miss out on 0.1 (as well as an 
infinite number of other rationals). But decimal floats suffer from the 
same issue. Between Decimal('0.0999...9') and Decimal('0.1') there are 
an infinite number of rationals that can't be expressed exactly, and if 
a calculation *should* produce one of those rationals, it will instead 
be rounded to an appropriate nearby value.




> No matter how many points of precision 
> you go out to, .3333 * 3 can -never- be equal to 1 (except for very
> large values of 3).

Really? Would you care to put money on that?

>>> decimal.getcontext().prec = 3
>>> Decimal('0.3333')
Decimal('0.3333')
>>> Decimal('0.3333') * 3
Decimal('1.00')



> 1/3 is a different number than .33333 repeating. 

Nonsense. 0.333 repeating is *exactly* one third. Ask a mathematician. 
Here is a short proof:

  x = 0.33333...  # goes on forever
10x = 3.33333...  # still goes on forever

subtract the first from the second:

 9x = 3.00000... = 3 exactly

so x = 3/9 = 1/3 exactly.
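
If you'd rather let the machine do the algebra, the same answer falls 
out of summing the geometric series 3/10 + 3/100 + 3/1000 + ..., whose 
exact sum is (3/10)/(1 - 1/10). Using the fractions module (Python 2.6 
or later):

>>> from fractions import Fraction
>>> Fraction(3, 10) / (1 - Fraction(1, 10))  # exact sum of the series
Fraction(1, 3)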


> It's close, getting closer the further out you go, and once it
> reaches infinity then sure, it's equivalent. 

You can't "reach" infinity. That's why it is *repeating* -- it never 
stops. This demonstrates that you can't write 1/3 in decimal exactly, 
you *must* round the number to a finite number of places. You can't 
write 1/3 exactly in binary either, but you can in base-three: "0.1".



> But unfortunately 
> computers are finite state machines and therefore are not capable of
> expressing the rational number 1/3 in its decimal equivalent.

Right. Which means any calculation that *should* produce 1/3, like 
Decimal(1)/Decimal(3), *must* be rounded.


> This has nothing to do with the Decimal module which will always
> perform reliably - you can count on Decimals to behave, precisely,
> but floats not so much

You are confused. float(1)/float(3) will always produce the same result, 
rounded the same way, just as Decimal(1)/Decimal(3) will.
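
A trivial check of that determinism, if you need one (just the same 
division repeated many times):

>>> all(float(1)/float(3) == float(1)/float(3) for _ in range(1000))
True
>>> from decimal import Decimal
>>> all(Decimal(1)/Decimal(3) == Decimal(1)/Decimal(3) for _ in range(1000))
True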

[...]
> If you want accurate representation of rational numbers, then of
> course like you suggested the Fraction module is available.

Which is what I said, and you told me to stop spreading myths about 
Decimal.
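
And for what it's worth, Fraction really does give exact rational 
arithmetic (a quick sketch; the fractions module needs Python 2.6 or 
later):

>>> from fractions import Fraction
>>> Fraction(1, 3) * 3
Fraction(1, 1)
>>> Fraction(1, 10) + Fraction(2, 10)  # no binary rounding surprises
Fraction(3, 10)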



-- 
Steven D'Aprano

