[Python-ideas] Python Numbers as Human Concept Decimal System

Oscar Benjamin oscar.j.benjamin at gmail.com
Thu Mar 6 13:51:01 CET 2014


On 6 March 2014 12:36, Stefan Krah <stefan at bytereef.org> wrote:
>
> Regarding decimal literals:
>
> Possible in 3.x, but it would require some investigation if
> people really want arbitrary precision arithmetic by default
> rather than e.g. IEEE Decimal64.

Interesting. I wasn't aware of Decimal64.

I don't understand your question/point though. The decimal module
doesn't provide "arbitrary" precision arithmetic by default. It
provides 28-digit precision arithmetic by default. It does, however,
allow the creation of Decimal instances with effectively arbitrary
precision from strings, integers, etc.
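A quick sketch of the distinction (the 33-zero value below is just an
illustrative choice):

```python
from decimal import Decimal, getcontext

# The default context rounds *arithmetic* to 28 significant digits:
print(getcontext().prec)  # 28

# Construction from a string is exact, regardless of context precision:
x = Decimal('1.000000000000000000000000000000001')  # 34 digits, stored exactly

# Arithmetic, by contrast, rounds the result to the context precision,
# so adding zero actually changes the value:
y = x + 0
print(y == x)  # False: y was rounded to 28 digits
```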

So I would assume that the idea was that nothing would change about
decimal arithmetic (by default or when importing the module). The
difference would just be that you could write 1.12345e-23d, which
would be equivalent to Decimal('1.12345e-23').
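Today the closest you can get is the explicit string constructor; going
via a float literal first loses the exactness the proposed literal would
guarantee:

```python
from decimal import Decimal

# What the proposed 1.12345e-23d literal would denote:
exact = Decimal('1.12345e-23')

# Writing Decimal(1.12345e-23) routes through a binary float literal,
# which can only approximate the decimal value:
via_float = Decimal(1.12345e-23)

print(exact == via_float)  # False: the binary float is approximate
```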

In this way it would be possible with a literal to express any decimal
value exactly (even if it exceeds the arithmetic precision). It would
be possible to do e.g. "if x <= 0.00001d:" and know that the test was
exact.
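To illustrate why the test is exact (the specific 23-digit value is just
an illustrative choice): Decimal comparisons are performed on the exact
stored values, without rounding to the context precision.

```python
from decimal import Decimal

threshold = Decimal('0.00001')           # exactly 1e-5
x = Decimal('0.0000100000000000000000001')  # more digits than the default
                                            # 28-digit precision, still exact

# The comparison is exact: x really is greater than 1e-5, even though
# the difference is far below the arithmetic precision.
print(x <= threshold)  # False
```

With binary floats the equivalent test is inexact before the comparison
even happens, since 0.00001 has no exact binary representation.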

> Regarding decimal floats by default:
>
> Perhaps in 4.0, but IMO only with *full* backing by the large SciPy
> and NumPy communities. My guess is that it's not going to happen, not
> in the least because the literature for floating point algorithms
> (with proofs!) has an overwhelming focus on binary floating point.

Yes, it's very hard to gauge the effect of something like this. I
expect that working out where a numeric code base assumes 64-bit
binary floating point would be even harder than disentangling unicode
from bytes from 8-bit encodings.


Oscar
