[Python-ideas] Python Numbers as Human Concept Decimal System

Oscar Benjamin oscar.j.benjamin at gmail.com
Sat Mar 8 21:32:58 CET 2014


On 8 March 2014 18:59, Guido van Rossum <guido at python.org> wrote:
> On Sat, Mar 8, 2014 at 10:11 AM, Oscar Benjamin <oscar.j.benjamin at gmail.com> wrote:
>>
>> On 8 March 2014 16:54, Guido van Rossum <guido at python.org> wrote:
>>
>> If you write Decimal(1.1) and are surprised by the result then you
>> have misunderstood something.
>
> Ah, but I'm not surprised. I'm unsatisfied. I understand what led to the
> result, but it's still not what I want, and it's a pain to train myself to
> do the extra thing that gives me what I want.

That "you" wasn't directed at *you* personally (and neither are the ones below).

If you're using the decimal module then it is precisely because you
do need to care about rounding, accuracy, etc. If you want to use it
but aren't prepared to make the effort to be careful about how to pass
exact input to the constructor, then you're taking an impossible
position that no one else can help you with.
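
For concreteness (my illustration, not part of the quoted exchange),
this is what the constructor does with a float in a Python 3 session:

    >>> from decimal import Decimal
    >>> Decimal(1.1)
    Decimal('1.100000000000000088817841970012523233890533447265625')

Those digits are not noise: they are the exact value of the binary
float nearest to 1.1.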

>> In any case if the result of Decimal(1.1) surprises you then it's
>> because you're expecting it to do something that should be done in a
>> different way. Hiding the extra digits does not help a user to
>> understand how to use Decimal.
>
> But does showing the extra digits do anything to help? It's just as likely
> to teach them a trick (add quotes or a str() call) without any new
> understanding.

It's enough to make you think "Why did that happen?". It's clear when
you see those digits that you have not created an object with the
exact value that you wanted. The obvious next question is "How do I
make it do what I want?". The docs can lead you very quickly to the
correct way of doing it:
http://docs.python.org/3.4/library/decimal.html#quick-start-tutorial
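
For example, the docs quickly lead you to constructing from exact
decimal text, either as a literal string or via str() on the float:

    >>> from decimal import Decimal
    >>> Decimal('1.1')
    Decimal('1.1')
    >>> Decimal(str(1.1))
    Decimal('1.1')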

>> There is a good solution to the problem of non-experts wanting to
>> write 1.1 and get the exact value 1.1: decimal literals. With those
>> they can just write 1.1d and not need to learn any more about it.
>
> You're right that I dismissed it too quickly. 3.14d is clearly even better
> than Decimal(3.14) doing the right thing. It is also still a lot more work
> (touches many parts of the code rather than just the Decimal class).
>
> Also I didn't realize that the C-implemented decimal module was already used
> in CPython (so I thought it would be even more work).

As Stefan mentioned earlier, there are other issues to resolve around
how a decimal literal should work. It's not obvious that there should
be a straightforward equivalence between 3.14d and D('3.14'). Perhaps
there should be a new thread to consider how that could be done (and
whether or not it's worth it).
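
To give one concrete example of the kind of question involved (my
illustration; not necessarily the issue Stefan had in mind): the
constructor today ignores the thread-local context while arithmetic
honours it, so it's not obvious which behaviour a literal should get:

    >>> from decimal import Decimal, getcontext
    >>> getcontext().prec = 2
    >>> Decimal('3.14159')   # construction is exact; context ignored
    Decimal('3.14159')
    >>> +Decimal('3.14159')  # unary plus rounds to the context precision
    Decimal('3.1')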

> But I still have this nagging feeling that the precision Decimal(<float>)
> currently gives you is, in a sense, *fake*, given that the input has much
> less precision.

That depends on what you use it for. My most common reason for
converting a float to a Decimal is to test the accuracy of a
float-based calculation by comparing it against the corresponding,
much higher-precision decimal calculation, e.g.:

    from decimal import Decimal as D, localcontext

    with localcontext() as ctx:
        ctx.prec = 100  # far more precision than the float calculation has
        error = f(D(x)) - D(f(x))

For this I want the constructor to give me the exact value of the float x.
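
Put together as a runnable sketch (f and x here are hypothetical
stand-ins; any f whose body is valid for both float and Decimal
arguments will do):

    from decimal import Decimal as D, localcontext

    def f(t):
        # Valid for float and Decimal alike: only t and ints appear.
        return (t*t - 2)/3

    x = 1.1
    with localcontext() as ctx:
        ctx.prec = 100
        error = f(D(x)) - D(f(x))  # ~ rounding error of the float result
    print(error)

With prec = 100 the decimal evaluation is effectively exact, so error
estimates the rounding error of the float computation of f(x).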


Oscar

