On Thu, Mar 06, 2014 at 04:17:55PM -0800, Mark H. Harris wrote:
> I just had one more thought along this line; consider this:
>
> >>> from pdeclib import *
> >>> s1=sqrt(2.01)
> >>> s1
> Decimal('1.41774468787578244511883132198668766452744')
> >>> s2=sqrt(d(2.01))
> >>> s2
> Decimal('1.41774468787578252029556185427085779261123')
> >>> s1**2
> Decimal('2.00999999999999978683717927196994423866272')
> >>> s2**2
> Decimal('2.01000000000000000000000000000000000000000')
If you skip the conversion to Decimal, you actually get the right answer using floats:
py> (2.01**0.5)**2
2.01
So the problem here isn't the binary float, but that Decimal by default has too much precision and consequently it ends up keeping digits that the user doesn't care about:
py> from decimal import Decimal as D
py> (D.from_float(2.01)**D("0.5"))**2
Decimal('2.009999999999999786837179272')
Floats have roughly 15 significant base-10 digits of precision (53 bits in base 2), so if we tell Decimal to use a comparable precision, say 14 digits, we should get the same result:
py> import decimal
py> ct = decimal.getcontext()
py> ct.prec = 14
py> (D.from_float(2.01)**D("0.5"))**2
Decimal('2.0100000000000')
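For what it's worth, mutating the global context affects everything else in the program. A small sketch (standard decimal module only, nothing from pdeclib) that scopes the precision change instead, using localcontext(), which restores the previous context on exit:

    from decimal import Decimal as D, localcontext

    with localcontext() as ctx:
        ctx.prec = 14  # roughly the float's useful precision
        x = (D.from_float(2.01) ** D("0.5")) ** 2

    print(x)  # Decimal('2.0100000000000')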
Decimal is not a panacea. Both Decimal and binary floats have the same limitations; they just occur in different places, for different numbers. All floating point numbers have these same issues. Fixed point numbers have different issues, rationals have their own issues, and symbolic computation has a different set of issues again.
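To make that concrete, here's a small illustration (standard library only) of each representation tripping over a different number:

    from decimal import Decimal
    from fractions import Fraction

    print(0.1 + 0.2 == 0.3)                  # False: 0.1 has no exact binary form
    print(Decimal(1) / Decimal(3) * 3 == 1)  # False: 1/3 has no exact decimal form
    print(Fraction(1, 3) * 3 == 1)           # True: exact for rationals...

...but Fraction can't represent sqrt(2) at all, and long chains of exact rational arithmetic can grow enormous numerators and denominators, which is its own cost.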
Computer maths is a leaky abstraction. No matter what you do or how you implement it, the abstraction leaks. Not even Mathematica can entirely hide the fact that it is computing rather than performing a Platonic ideal of mathematics.