
On Sun, Mar 21, 2010 at 11:25 AM, Raymond Hettinger <raymond.hettinger@gmail.com> wrote:
Right. We should be guided by: fractions are a superset of decimals which are a superset of binary floats.
But mixed Fraction-float operations return floats, not Fractions.
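A quick check of the status quo being referenced here, in any recent Python: a mixed Fraction/float operation falls back to float rather than promoting the float to Fraction.

    # Mixed Fraction/float arithmetic yields a float, not a Fraction.
    from fractions import Fraction

    result = Fraction(1, 3) + 0.5
    print(result, type(result))     # 0.8333333333333333 <class 'float'>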
And by: binary floats and decimal floats both implement all of the operations for the Real abstract base class.
Sure, but that doesn't help us decide what mixed Decimal-float operations should return.
It seems to me that Decimals and floats should be considered at the same level (i.e. both implement Real).
Agreed, but doesn't help. (Except against the idea that Decimal goes on the "integer" side of Fraction, which is just wrong.)
Mixed Decimal and float should coerce to Decimal because it can be done losslessly.
But mixed Fraction-float returns float even though returning Fraction could be done losslessly. The real criterion should be what's more useful, not what can be done losslessly.
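For illustration, both conversions in question really are lossless -- every binary float has an exact Fraction form and an exact Decimal form -- which is why losslessness alone cannot decide the result type.

    # A binary float converts exactly to both Fraction and Decimal.
    from decimal import Decimal
    from fractions import Fraction

    print(Fraction.from_float(1.1))   # 2476979795053773/2251799813685248
    print(Decimal.from_float(1.1))    # 1.100000000000000088817841970012523233890533447265625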
There is no need to embed a notion of "imperfect answer". Numbers themselves are exact and many mixed operations can be exact if the coercions go the right way.
Division cannot be exact in general (I consider floor division a bastard child of the integers). And for multiplication, it seems that rounding at some point becomes necessary, since the alternative would be to use infinite precision.
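A small sketch of that point with the decimal module (context precision shrunk to 5 digits just to keep the output short): division has to round, and multiplication stays exact only until the result outgrows the context precision.

    # Division rounds; multiplication is exact until precision runs out.
    from decimal import Decimal, getcontext

    getcontext().prec = 5
    print(Decimal(1) / Decimal(7))               # 0.14286  (rounded)
    print(Decimal('1.23') * Decimal('4.56'))     # 5.6088   (exact, fits in 5 digits)
    print(Decimal('1.2345') * Decimal('6.789'))  # 8.3810   (rounded from 8.3810205)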
Some folks who have had bad experiences with representation error (e.g. 1.1 cannot be exactly represented as a binary float) or with round-off error (e.g. 1.0 / 7.0 must be rounded) tend to think of both binary and decimal floats as necessarily inexact. But that is not the case; exact accounting work is perfectly feasible with decimals. Remember, the notion of inexactness is a taint, not an intrinsic property of a type. Even the Scheme numeric tower recognizes this. Likewise, the decimal specification also spells out this notion as basic to its design.
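A minimal sketch of the "taint, not type property" idea, using the decimal module's Inexact signal: trap it, and any operation that actually has to round becomes loud, while exact decimal arithmetic runs silently.

    # Inexactness as a signal: exact sums pass, rounding raises.
    from decimal import Decimal, getcontext, Inexact

    getcontext().traps[Inexact] = True
    print(Decimal('1.10') + Decimal('2.20'))     # 3.30 -- exact, no signal
    try:
        Decimal(1) / Decimal(7)                  # cannot be represented exactly
    except Inexact:
        print("this result would have been rounded")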
I really don't think advertising Decimal as having exact operations is the right thing to do. Sure, it is the right data type for all accounting operations -- but that is a very specialized use case, where certain round-off errors are desirable (since nobody wants fractional pennies in their bill).
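For the accounting case, the deliberate rounding is usually spelled out explicitly with quantize(); the figures below are made up purely for illustration.

    # Rounding to a whole number of cents on purpose.
    from decimal import Decimal, ROUND_HALF_UP

    price = Decimal('7.99')
    tax_rate = Decimal('0.0825')
    tax = (price * tax_rate).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
    print(tax)                                   # 0.66 (from the exact 0.659175)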
I believe that no "clean-up" is necessary. Decimal already implements the Real ABC. All that is necessary is the common __hash__ algorithm and removing the restriction on Decimal/float interaction, so that any two instances of Real can interoperate with one another.
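What the common __hash__ algorithm has to guarantee is that numerically equal values of different Real types hash equal, so they can coexist as dict keys and set members (this is the rule CPython eventually adopted in 3.2):

    # Equal numbers across Real types must hash equal.
    from decimal import Decimal
    from fractions import Fraction

    assert hash(0.5) == hash(Fraction(1, 2)) == hash(Decimal('0.5'))
    print({0.5, Fraction(1, 2), Decimal('0.5')})   # one element, not three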
Call it by any name you want. We're looking at revising the hash function and allowing mixed operations between Decimal and float, with a signal that warns about these operations. I see two open issues: whether mixed Decimal-float operations should return Decimal or float, and whether the warning about such operations should be on or off by default. My gut tells me that the signal should be off by default. My gut doesn't say much about whether the result should be float or Decimal, but I want the reasoning behind that decision to be sound.

--
--Guido van Rossum (python.org/~guido)
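A sketch of one shape such an off-by-default signal can take, modeled on the FloatOperation trap that the decimal module later acquired (Python 3.3+): untrapped, a mixed use is allowed and merely recorded in the context flags; trapped, it raises.

    # Off by default: mixed use is only recorded. Trapped: it raises.
    from decimal import Decimal, getcontext, FloatOperation

    ctx = getcontext()
    print(Decimal('2.5') > 2.2)              # True  -- allowed, but ...
    print(ctx.flags[FloatOperation])         # True  -- ... recorded

    ctx.traps[FloatOperation] = True
    try:
        Decimal(1.1)                         # implicit conversion from float
    except FloatOperation:
        print("mixed Decimal/float use trapped")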