
March 19, 2010
6:53 p.m.
Glenn Linderman <v+python <at> g.nevcal.com> writes:
> So when a coder chooses to use Decimal, it is because float is inappropriate. Because float is inappropriate, mixing Decimal and float is inappropriate. Having the language coerce implicitly is inappropriate.
I'm sorry, but this is very dogmatic. What is the concrete argument against an accurate comparison between floats and decimals?
> Comparisons need to be done with full knowledge of the precision of the numbers. The additional information necessary to do so cannot be encoded in a binary operator.
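
To make the precision concern concrete: the float literal 0.1 does not denote one tenth but the nearest binary64 value, so an exact comparison against Decimal('0.1') yields False. A minimal sketch, assuming Python 3.2+ behavior (where Decimal(float) converts exactly and mixed comparisons are value-based):

    from decimal import Decimal

    # The float 0.1 is really the nearest binary64 value, not one tenth;
    # constructing a Decimal from it (exact conversion) makes this visible.
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    # An exact, value-based comparison is therefore False -- mathematically
    # correct, but surprising if the user "meant" 0.1 at some low precision.
    print(Decimal("0.1") == 0.1)  # False under Python 3.2+ semantics
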
This doesn't have anything to do with the mixing of floats and decimals, though, since it also applies to unmixed comparisons. Again, is there an argument specific to mixed comparisons?
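
An exact mixed comparison is indeed well-defined without any precision context, because every IEEE-754 float is a binary rational that a Decimal can represent exactly. A sketch of that idea, using only Decimal.from_float (available since Python 2.7/3.1); the helper name exact_lt is illustrative, not part of any library:

    from decimal import Decimal

    def exact_lt(d: Decimal, f: float) -> bool:
        """Compare a Decimal to a float exactly, with no rounding.

        Decimal.from_float converts the float losslessly, so the result
        depends only on the two mathematical values, not on any notion
        of the user's intended precision.
        """
        return d < Decimal.from_float(f)

    # Decimal('0.1') is exactly one tenth; the float 0.1 is slightly larger.
    print(exact_lt(Decimal("0.1"), 0.1))  # True

This is essentially what CPython later adopted: from Python 3.2 on, == and < between Decimal and float compare the exact values directly.
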