[Python-ideas] Python Float Update

Nick Coghlan ncoghlan at gmail.com
Mon Jun 1 09:08:40 CEST 2015


On 1 June 2015 at 16:27, Nicholas Chammas <nicholas.chammas at gmail.com> wrote:
> I always assumed that float literals were mostly an artifact of history or
> of some performance limitations. Free of those, why would a language choose
> them over decimal literals?

In a world of binary computers, no programming language is free of
those constraints - if you choose decimal literals as your default,
you take a *big* performance hit, because computers are designed as
binary systems. (Some languages, like IBM's REXX, do choose to use
decimal arithmetic by default.)

For CPython, we have offered C-accelerated decimal support by default
since 3.3 (available as "pip install cdecimal" in Python 2), but it
still comes at a significant cost in speed:

$ python3 -m timeit -s "n = 1.0; d = 3.0" "n / d"
10000000 loops, best of 3: 0.0382 usec per loop
$ python3 -m timeit -s "from decimal import Decimal as D; n = D(1); d = D(3)" "n / d"
10000000 loops, best of 3: 0.129 usec per loop
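
On 3.3+, a plain "import decimal" picks up the C accelerator
automatically; on Python 2, the usual approach is a guarded import of
the separately installed cdecimal package - a minimal sketch, assuming
cdecimal has been installed as above:

try:
    import cdecimal as decimal  # C-accelerated drop-in replacement
except ImportError:
    import decimal              # pure Python stdlib fallback

print(decimal.Decimal(1) / decimal.Decimal(3))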

And this isn't even like the situation with integers, where the
semantics of long integers are such that native integers can be used
transparently as an optimisation technique. IEEE 754 (which defines
the behaviour of native binary floats) and the General Decimal
Arithmetic Specification (which defines the behaviour of the decimal
module) are genuinely different ways of doing floating point
arithmetic, since the choice of base 2 or base 10 has far-reaching
ramifications for the way various operations work and how various
errors accumulate.
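
To make that divergence concrete, a quick sketch of the kind of
difference in question (using only the stdlib decimal module):

>>> from decimal import Decimal, getcontext
>>> 0.1 + 0.2                        # binary: 0.1 has no exact base 2 representation
0.30000000000000004
>>> Decimal('0.1') + Decimal('0.2')  # decimal: exact
Decimal('0.3')
>>> getcontext().prec = 6            # user-settable precision, per the decimal spec
>>> Decimal(1) / Decimal(3)
Decimal('0.333333')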

We aren't even likely to see widespread proliferation of
hardware-level decimal arithmetic units, because the "binary
arithmetic is easier to implement than decimal arithmetic"
consideration extends down to the hardware layer as well - a decimal
arithmetic unit takes more silicon, and hence more power, than a
similarly capable binary unit. With battery-conscious mobile device
design and environmentally conscious data centre design being two of
the most notable current trends in CPU design, it's harder than ever
to justify providing hardware support for both in general-purpose
computing devices.

For some use cases (e.g. financial math), it's worth paying the price
in speed to get the base 10 arithmetic semantics, or the cost in
hardware to accelerate it, but for most other situations, we end up
being better off teaching humans to cope with the fact that binary
logic is the native language of our computational machines.
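
As a sketch of why the base 10 semantics matter for that financial
case (standard decimal module API, nothing beyond the stdlib):

>>> from decimal import Decimal, ROUND_HALF_UP
>>> round(1.005, 2)  # the nearest binary float to 1.005 is just under it
1.0
>>> Decimal('1.005').quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
Decimal('1.01')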

Binary vs decimal floating point is a lot like the Unicode bytes/text
distinction in that regard: while Unicode is a better model for
representing human communications, there's no avoiding the fact that
text eventually has to be rendered as a bitstream in order to be
saved or transmitted.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

