# [Python-Dev] Mixing float and Decimal -- thread reboot

Guido van Rossum guido at python.org
Fri Mar 19 22:50:04 CET 2010

I'd like to reboot this thread. I've been spinning this topic in my
head for most of the morning, and I think we should seriously
reconsider allowing mixed arithmetic involving Decimal, not just mixed
comparisons. [Quick summary: embed Decimal in the numeric tower but
add a context flag to disallow implicit mixing of float and Decimal.]

I tried to find the argumentation against it in PEP 327 (Decimal Data
Type) and found that it didn't make much of an argument against mixed
arithmetic beyond "it's not needed" and "it's not urgent". (It even
states that initially Decimal.from_float() was omitted for simplicity
-- but it got added later.)
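[Editorial aside, not part of the original mail: the `Decimal.from_float()` method mentioned above landed in Python 2.7/3.1, and it converts a binary float *exactly*, which is a quick way to see what a float really stores:]

```python
from decimal import Decimal

# Decimal.from_float converts the binary float exactly, exposing the
# value the float actually stores rather than the literal you typed.
exact = Decimal.from_float(0.1)
print(exact)
# 0.1000000000000000055511151231257827021181583404541015625

# Exactly representable values round-trip cleanly:
assert Decimal.from_float(0.5) == Decimal("0.5")
```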

We now also have PEP 3141 (A Type Hierarchy for Numbers) which
proposes a numeric tower. It has an explicit exclusion for Decimal,
but that exclusion is provisional: "After consultation with its
authors it has been decided that the ``Decimal`` type should not at
this time be made part of the numeric tower." That was a compromise:
at the time, some contributors to Decimal were fiercely opposed to
including Decimal in the numeric tower, and I didn't want an endless
discussion when there were many more pressing issues to be resolved.

However now the subject is coming up again, and my gut keeps telling
me that Decimal ought to be properly embedded in Python's numeric
tower. Decimal is already *touching* the numeric tower by allowing
mixed arithmetic with ints. This causes the anomaly that Mark
mentioned earlier: the three values 1, 1.0 and Decimal(1) do not
satisfy the rule "if x == y and y == z then it follows that x == z".
We have 1 == 1.0 and 1 == Decimal(1), but 1.0 != Decimal(1). This also
causes problems with hashing, where the sets {Decimal(1), 1, 1.0} and
{Decimal(1), 1.0, 1} need not be equal, because which elements survive
insertion depends on the order.
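[Editorial aside: Python 3.2 later resolved exactly this anomaly by allowing mixed Decimal/float comparisons and unifying the numeric hash, so on a modern interpreter the transitivity rule can be checked directly and holds:]

```python
from decimal import Decimal

x, y, z = 1, 1.0, Decimal(1)

# On Python >= 3.2, mixed Decimal/float comparison is allowed and the
# numeric hash is unified, so equality is transitive again:
assert x == y and y == z and x == z

# ... and set contents no longer depend on insertion order:
assert {Decimal(1), 1, 1.0} == {Decimal(1), 1.0, 1}
assert len({Decimal(1), 1, 1.0}) == 1
```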

I'd like to look at the issue by comparing the benefits and drawbacks
of properly embedding Decimal into the numeric tower. As advantages, I
see consistent behavior in situations like the above and more
intuitive behavior for beginners. Also, this would be a possible road
towards eventually supporting a language extension where floating
point literals produce Decimal values instead of binary floats. (A
possible syntax could be "from __options__ import decimal_float",
which would work similarly to "from __future__ import ..." except that
it would be a permanent part of the language rather than a
forward-compatibility feature.)
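[Editorial aside: the motivation for such a literal form is easy to demonstrate today. A decimal string and a float literal do not produce the same Decimal value, because the float literal is rounded to binary first:]

```python
from decimal import Decimal

# The string constructor gives the intended decimal value ...
a = Decimal("1.1")

# ... while a float literal is rounded to binary floating point first;
# converting the float exactly reveals a slightly different number.
b = Decimal(1.1)

assert a != b
print(b)
# 1.100000000000000088817841970012523233890533447265625
```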

As a downside, there is the worry that inadvertent mixing of Decimal
and float can compromise the correctness of programs in a way that is
hard to detect. But the anomalies above indicate that not fixing the
situation can *also* compromise correctness in a similar way. Maybe a
way out would be to add a new flag to the decimal Context class that
disallows mixing Decimal and float; that way, users who worry about
inadvertent mixing can opt into strict behavior. Possibly the flag
should not affect comparisons.
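[Editorial aside: the proposed Context flag is hypothetical, but a minimal sketch of the intended behavior can be written today as a stand-in helper (`strict_add` is an illustrative name, not a real API) that simply rejects floats before doing arithmetic:]

```python
from decimal import Decimal

def strict_add(a, b):
    # Hypothetical stand-in for a Context flag that disallows implicit
    # Decimal/float mixing: reject float operands outright.
    if isinstance(a, float) or isinstance(b, float):
        raise TypeError("implicit mixing of float and Decimal disallowed")
    return Decimal(a) + Decimal(b)

print(strict_add(Decimal("1.1"), 2))  # 3.1
# strict_add(Decimal("1.1"), 2.0) would raise TypeError instead of
# silently producing a mixed-type result.
```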

There is one choice which I'm not sure about. Should a mixed
float/Decimal operation return a float or a Decimal? I note that
Fraction (which *is* properly embedded in the numeric tower) supports
this and returns a float result in this case. While I earlier proposed
to return the most "complicated" type of the two, i.e. Decimal, I now
think it may also make sense to return a float, being the most "fuzzy"
type in the numeric tower. This would also make checking for
accidental floats easier, since floats now propagate throughout the
computation (like NaN) and a simple assertion that the result is a
Decimal instance suffices to check that no floats were implicitly
mixed into the computation.
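[Editorial aside: Fraction's behavior, which the paragraph refers to, is easy to observe. Mixing with a float "infects" the result, so a trailing type check is enough to catch accidental mixing:]

```python
from fractions import Fraction

# Pure-Fraction arithmetic stays exact and stays a Fraction:
result = Fraction(1, 3) + Fraction(1, 6)
assert isinstance(result, Fraction)
assert result == Fraction(1, 2)

# Mixing in a float yields a float, so the float "propagates" through
# the computation much like a NaN would:
mixed = Fraction(1, 3) + 0.5
assert isinstance(mixed, float)

# A simple final assertion detects accidental mixing:
assert isinstance(result, Fraction), "a float leaked into the computation"
```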

The implementation of __hash__ will be complicated, and it may make
sense to tweak the hash functions of float, Fraction and Decimal so
that values representable in more than one type hash identically
whenever they compare equal. But that sounds like a worthwhile price
to pay for proper embedding in the numeric tower.
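[Editorial aside: later Python versions adopted exactly such a scheme. Since Python 3.2, all numeric types share one hash based on reduction modulo a fixed prime (exposed as `sys.hash_info.modulus`), so equal values hash equally across types:]

```python
import sys
from decimal import Decimal
from fractions import Fraction

# Python >= 3.2 hashes all numeric types with a single scheme, so
# values that compare equal hash equally regardless of type:
assert hash(1) == hash(1.0) == hash(Fraction(1)) == hash(Decimal(1))
assert hash(0.5) == hash(Fraction(1, 2)) == hash(Decimal("0.5"))

# The prime modulus underlying the scheme (2**61 - 1 on 64-bit builds):
print(sys.hash_info.modulus)
```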

--
--Guido van Rossum (python.org/~guido)