Comparing float and decimal
dickinsm at gmail.com
Thu Sep 25 11:42:20 CEST 2008
On Sep 24, 6:18 pm, Terry Reedy <tjre... at udel.edu> wrote:
> If there is not now, there could be in the future, and the decimal
> authors are committed to follow the standard wherever it goes.
> Therefore, the safe course, to avoid possible future deprecations due to
> doing too much, is to only do what is mandated.
Makes sense. It looks as though the standard's pretty stable now
though; I'd be quite surprised to see it evolve to include discussion
of floats. But then again, people thought it was stable just before
all the extra transcendental operations appeared. :-)
> For integral values, this is no problem.
> >>> hash(1) == hash(1.0) == hash(decimal.Decimal(1)) == hash(fractions.Fraction(1)) == 1
Getting integers and Decimals to hash equal was actually
something of a pain, and required changing the way that
the hash of a long was computed. The problem in a nutshell:
what's the hash of Decimal('1e100000000')? The number is
clearly an integer, so its hash should be the same as that
of 10**100000000. But computing 10**100000000, and then
finding its hash, is terribly slow... (Try
hash(Decimal('1e100000000')) in Python 2.5 and see
what happens! It's fixed in Python 2.6.)
As more numeric types get added to Python, this
'equal implies equal hash' requirement becomes more
and more difficult to maintain. I also find
it a rather unnatural requirement: numeric equality
is, to me, a weaker equivalence relation than the one
that should be used for identifying keys in dictionaries,
elements of sets, etc. Fraction(1, 2) and 0.5 should,
to my eyes, be considered different elements of a set.
But the only way to 'fix' this would be to have Python
recognise two different types of equality, and then it
wouldn't be Python any more.
The SAGE folks also discovered that they couldn't
maintain the hash requirement.
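A quick illustration of the behaviour I mean (any current Python
shows it): because equality is numeric and hashes agree, the two
values collapse to a single set element.

```python
from fractions import Fraction

# Python's single notion of equality: 0.5 and Fraction(1, 2) compare
# equal and hash equal, so a set treats them as one element, and a
# dict would treat them as one key.
assert 0.5 == Fraction(1, 2)
assert hash(0.5) == hash(Fraction(1, 2))
s = {0.5, Fraction(1, 2)}
assert len(s) == 1
```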
> Decimals can also be converted to floats (they also have a __float__
> method). But unlike fractions, the conversion must be explicit, using
> float(decimal), instead of implicit, as with ints and fractions.
Maybe: if I *had* to pick a direction, I'd make float + Decimal
produce a Decimal, on the basis that Decimal is arbitrary precision
and that the float->Decimal conversion can be done losslessly.
But then there are a whole host of decisions one has to make
about rounding, significant zeros, ... (And then, as you point
out, Cowlishaw might come out with a new version of the standard
that does include interactions with floats, and makes an entirely
different set of decisions...)
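The losslessness claim is easy to demonstrate: every binary float is
p/2**q, so its exact value has a finite decimal expansion.
(Decimal.from_float, used below, is a later addition, Python 2.7 and
3.1+, shown here just to illustrate the point.)

```python
from decimal import Decimal

# Converting the binary float to Decimal captures its exact stored
# value, so the conversion round-trips perfectly...
d = Decimal.from_float(0.1)
assert float(d) == 0.1

# ...but the exact value is the binary approximation of 0.1, not the
# decimal literal 0.1, which is one of the rounding subtleties a
# mixed float/Decimal arithmetic would have to pin down.
assert d != Decimal('0.1')
```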