[Python-ideas] float vs decimal

Chris Rebert pyideas at rebertia.com
Wed Mar 11 22:39:31 CET 2009

On Wed, Mar 11, 2009 at 2:32 PM, William Dode <wilk at flibuste.net> wrote:
> On 11-03-2009, Chris Rebert wrote:
>> On Wed, Mar 11, 2009 at 12:25 PM, William Dode <wilk at flibuste.net> wrote:
>>> Hi,
>>> I just read the blog post of gvr :
>>> http://python-history.blogspot.com/2009/03/problem-with-integer-division.html
>>> And I wonder why
>>>>>> .33
>>> 0.33000000000000002
>>> is still possible in a "very-high-level language" like Python 3.
>>> Why couldn't .33 be a Decimal directly?
>> I proposed something like this earlier, see:
>> http://mail.python.org/pipermail/python-ideas/2008-December/002379.html
>> Obviously, the proposal didn't go anywhere, the reason being that
>> Decimal is currently implemented in Python and is thus much too
>> inefficient to be the default (efficiency/practicality beating
>> correctness/purity here apparently). There are non-Python
>> implementations of the decimal standard in C, but no one could locate
>> one with a Python-compatible license. The closest was the IBM
>> implementation whose spec the decimal PEP was based off of, but
>> unfortunately it uses the ICU License which has a classic-BSD-like
>> attribution clause.
> Thanks for summing up the situation.
> That raises another question: why is it so difficult to mix float and
> Decimal?
> For example, we cannot do Decimal(float) or float * Decimal.
> And I'm afraid that with Python 3 it will more often be a pain, because
> an operation on two integers sometimes returns an integer (which can
> then be combined with a Decimal) and sometimes returns a float (which
> cannot).
>>>> a = Decimal('5.3')
>>>> i = 4
>>>> j = 3
>>>> i*a/j
> Decimal('7.066666666666666666666666667')
>>>> i/j*a
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> TypeError: unsupported operand type(s) for *: 'float' and 'Decimal'
> I mean, when a Decimal appears in the middle of an expression, why
> isn't the float silently converted to a Decimal?

It's in the FAQ section of the decimal module -
http://docs.python.org/library/decimal.html :

17. Is there a way to convert a regular float to a Decimal?
A. Yes, all binary floating point numbers can be exactly expressed as
a Decimal. An exact conversion may take more precision than intuition
would suggest, so we trap Inexact to signal a need for more precision:
  def float_to_decimal(f):
    [definition snipped]

18. Why isn't the float_to_decimal() routine included in the module?
A. There is some question about whether it is advisable to mix binary
and decimal floating point. Also, its use requires some care to avoid
the representation issues associated with binary floating point:
>>> float_to_decimal(1.1)
Decimal('1.100000000000000088817841970012523233890533447265625')
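
[For reference, the snipped routine follows the idea stated in the FAQ answer:
a float is an exact binary fraction, so its integer ratio can be divided in a
Decimal context whose precision is raised until the division is exact. A
sketch along those lines, assuming Python 2.6+ for float.as_integer_ratio():]

```python
from decimal import Decimal, Context, Inexact

def float_to_decimal(f):
    """Convert a float to a Decimal with no loss of information."""
    # Every binary float is an exact ratio of two integers.
    n, d = f.as_integer_ratio()
    numerator, denominator = Decimal(n), Decimal(d)
    ctx = Context(prec=60)
    result = ctx.divide(numerator, denominator)
    # If the division was inexact, double the precision and retry
    # until the Inexact flag stays clear.
    while ctx.flags[Inexact]:
        ctx.flags[Inexact] = False
        ctx.prec *= 2
        result = ctx.divide(numerator, denominator)
    return result

print(float_to_decimal(1.1))
```

[The surprising length of the result is the point: 1.1 has no exact binary representation, so the nearest double expands to a 52-digit decimal.]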
