[Python-Dev] Mixing float and Decimal -- thread reboot

Raymond Hettinger raymond.hettinger at gmail.com
Sun Mar 21 20:31:57 CET 2010


On Mar 21, 2010, at 11:50 AM, R. David Murray wrote:

> On Sun, 21 Mar 2010 11:25:34 -0700, Raymond Hettinger <raymond.hettinger at gmail.com> wrote:
>> It seems to me that Decimals and floats should be considered at
>> the same level (i.e. both implement Real).
>> 
>> Mixed Decimal and float should coerce to Decimal because it can be
>> done losslessly.
>> 
>> There is no need to embed a notion of "imperfect answer".
>> Numbers themselves are exact, and many mixed operations
>> can be exact if the coercions go the right way.
> 
> I think the concern here is rather about operations such as:
> 
>    1.1 + Decimal('1.1')
> 
> The calculation may produce an "exact" result, but it won't be the exact
> result expected, because the conversion from string (at the program text
> file level) to float was lossy.  Thus the desire for some mechanism to
> know that floats and decimals have been mixed anywhere in the calculations
> that led up to whatever result number you are looking at, and to have
> such mixing trigger an exception if the programmer requests it.

That makes sense.  That's why Guido proposed a context flag in
decimal to issue a warning for implicit mixing of decimals and floats.
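
For illustration, here is the mismatch David describes at the
interpreter prompt; Decimal.from_float (the explicit lossless
converter added in 2.7/3.1) exposes the exact value the float
literal actually carries:

    >>> from decimal import Decimal
    >>> Decimal('1.1')
    Decimal('1.1')
    >>> Decimal.from_float(1.1)    # the exact value behind the float literal
    Decimal('1.100000000000000088817841970012523233890533447265625')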

What I was talking about was a different issue.  The question of
where to place decimals in the numeric hierarchy was erroneously
being steered by the notion that both decimal and binary floats
are intrinsically inexact.  That is incorrect: inexactness is a
taint on a computation, and the numbers themselves are always exact.
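
The coercion itself loses nothing.  A quick sketch of a lossless
round trip, and of a mixed operation that comes out exact when the
coercion goes toward Decimal:

    >>> from decimal import Decimal
    >>> float(Decimal.from_float(1.1)) == 1.1     # float -> Decimal -> float is exact
    True
    >>> Decimal.from_float(0.5) + Decimal('1.1')  # mixed operands, exact result
    Decimal('1.6')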

I really like Guido's idea of a context flag to control whether
mixing of decimal and binary floats will issue a warning.
The default should be to issue the warning (because unless you
know what you're doing, mixing the two types is most likely an error).
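
A sketch of how such a flag behaves in practice, using the
FloatOperation signal that later shipped in Python 3.3's decimal
module: off by default, mixing merely sets a context flag, but
trapping it turns implicit mixing into an error:

    >>> from decimal import Decimal, FloatOperation, getcontext
    >>> c = getcontext()
    >>> Decimal(1.1)                   # permitted by default ...
    Decimal('1.100000000000000088817841970012523233890533447265625')
    >>> c.flags[FloatOperation]        # ... but quietly recorded
    True
    >>> c.traps[FloatOperation] = True
    >>> Decimal(1.1)                   # now an explicit error
    Traceback (most recent call last):
      ...
    decimal.FloatOperation: [<class 'decimal.FloatOperation'>]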


Raymond


