[Python-Dev] Mixing float and Decimal -- thread reboot
raymond.hettinger at gmail.com
Mon Mar 22 21:53:25 CET 2010
For the record, I thought I would take a stab at making a single post
that recaps the trade-offs and reasoning behind the decision
to have Fraction + decimal/float --> decimal/float.
* While we know that both decimal and binary floats have a
fixed internal precision and can be converted losslessly to
a rational, that doesn't correspond to the way we think about
them. We tend to think of floating point values as real numbers,
not as rationals.
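To make the losslessness claim concrete, here is an illustrative
interpreter-style snippet (mine, not from the original post):

```python
from decimal import Decimal
from fractions import Fraction

# A binary float is internally a dyadic rational, so conversion to
# Fraction is exact -- even when the result looks nothing like the
# "real number" we had in mind.
print(Fraction(0.1))             # 3602879701896397/36028797018963968
print((0.1).as_integer_ratio())  # (3602879701896397, 36028797018963968)

# A Decimal also converts exactly, and matches the intuition better.
print(Fraction(Decimal("0.1")))  # 1/10
```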
* There is a notion of fractions being used for unrounded
arithmetic while float operations are rounded arithmetic.
So, it doesn't make sense to create the illusion of an unrounded
result from inputs that were already subjected to rounding.
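The rounded/unrounded distinction in concrete terms (my own sketch):

```python
from fractions import Fraction

# Fraction arithmetic is exact (unrounded) ...
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)

# ... while float arithmetic rounds at every step, so the same
# sum misses the mathematically exact result.
assert 0.1 + 0.2 != 0.3
```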
* Backward compatibility. That is what the fractions module already
does and we haven't had any problems with it.
* The coercion logic for comparisons won't match the
coercion logic for arithmetic operations. The former
strives to be exact and to be consistent with hashing
while the latter goes in the opposite direction.
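A small illustration of that asymmetry (my own snippet):

```python
from fractions import Fraction

# Comparison coercion is exact: the float is compared as the
# rational it really is, so one tenth != the float 0.1.
assert Fraction(1, 10) != 0.1

# Arithmetic coercion goes the other way: the Fraction is rounded
# to a float first, and the result is a float.
result = Fraction(1, 10) + 0.1
assert isinstance(result, float)
```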
* Operations such as some_float + some_fraction are
subject to double rounding. This potentially produces
a different result than the single rounding in:
float(Fraction.from_float(some_float) + some_fraction)
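One input pair where the two roundings actually diverge (my own
constructed example; the constant is chosen to land exactly on a
round-to-even boundary so the double rounding loses information):

```python
from fractions import Fraction

some_float = 1.0
some_fraction = Fraction(2**53 + 1, 2**106)  # just over half an ulp of 1.0

# Double rounding: the fraction is first rounded to a float (a tie,
# resolved to even), then the sum is rounded again (another tie).
double_rounded = some_float + some_fraction           # -> 1.0

# Single rounding: the exact rational sum is rounded only once,
# and it sits above the tie point, so it rounds up.
single_rounded = float(Fraction.from_float(some_float) + some_fraction)
# -> 1.0000000000000002

assert double_rounded != single_rounded
```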