[Python-ideas] Hexadecimal floating literals

Nick Coghlan ncoghlan at gmail.com
Fri Sep 22 00:20:45 EDT 2017


On 22 September 2017 at 13:38, Guido van Rossum <guido at python.org> wrote:
> On Thu, Sep 21, 2017 at 8:30 PM, David Mertz <mertz at gnosis.cx> wrote:
>>
>> Simply because the edge cases for working with e.g. '0xC.68p+2' in a
>> hypothetical future Python are less obvious and less simple to demonstrate,
>> I feel like learners will be tempted to think that using this base-2/16
>> representation saves them all their approximation issues and their need
>> still to use isclose() or friends.
>
> Show them 1/49*49, and explain why for i < 49, (1/i)*i equals 1 (lucky
> rounding).
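
Spelling that suggestion out at a Python 3 prompt (this just restates
Guido's example, no new behaviour is assumed):

    >>> 1/49*49 == 1   # 49 is the first integer where this breaks
    False
    >>> all((1/i)*i == 1 for i in range(1, 49))
    True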

If anything, I'd expect the hex notation to make the binary vs decimal
representational differences *easier* to teach, since instructors
would be able to directly show things like:

    >>> 0.5 == 0x0.8 == 0o0.4 == 0b0.1 # Negative power of two!
    True
    >>> (0.1 + 0.2) == 0.3 # Not negative powers of two
    False
    >>> 0.3 == 0x1.3333333333333p-2
    True
    >>> (0.1 + 0.2) == 0x1.3333333333334p-2
    True

While it's possible to provide a demonstration along those lines
today, it means writing the last two lines as:

    >>> 0.3.hex() == "0x1.3333333333333p-2"
    True
    >>> (0.1 + 0.2).hex() == "0x1.3333333333334p-2"
    True

(Which invites the question "Why does 'hex(3)' work, but I have to
write '0.3.hex()' instead?")
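
float.fromhex() already covers the reverse direction, so the round
trip through those strings can at least be shown with today's syntax:

    >>> float.fromhex("0x1.3333333333333p-2") == 0.3
    True
    >>> float.fromhex("0x1.3333333333334p-2") == 0.1 + 0.2
    True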

To illustrate that hex floating point literals don't magically solve
all your binary floating point rounding issues, an instructor could
also demonstrate:

    >>> one_tenth = 0x1.0 / 0xA.0
    >>> two_tenths = 0x2.0 / 0xA.0
    >>> three_tenths = 0x3.0 / 0xA.0
    >>> three_tenths == one_tenth + two_tenths
    False

Again, a demonstration along those lines is already possible today,
but it means writing the divisions with integer literals rather than
float ones.
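
For example, the integer-based equivalent of the snippet above:

    >>> one_tenth = 1 / 10
    >>> two_tenths = 2 / 10
    >>> three_tenths = 3 / 10
    >>> three_tenths == one_tenth + two_tenths
    False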

Given syntactic support, it would also be reasonable for the
hex()/oct()/bin() builtins to be expanded to handle printing floating
point numbers in those formats, and for floats to gain support for the
corresponding print formatting codes.
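
To make the kind of behaviour I have in mind a bit more concrete,
here's a rough sketch built on today's float.hex() (a hypothetical
helper for illustration only, not a concrete proposal for the
builtin's signature):

    >>> def hex_any(value):
    ...     # Hypothetical helper: roughly what an extended hex()
    ...     # builtin might do, dispatching on the argument's type.
    ...     return value.hex() if isinstance(value, float) else hex(value)
    ...
    >>> hex_any(3)
    '0x3'
    >>> hex_any(0.3)
    '0x1.3333333333333p-2'
    >>> hex_any(0.1 + 0.2)
    '0x1.3333333333334p-2'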

So overall, I'm still +0, on the grounds of improving int/float API consistency.

While I'm sympathetic to the concerns about potentially changing the
way the binary/decimal representation distinction is taught for
floating point values, I don't think having better support for more
native representations of binary floats is likely to make that harder
than it already is.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
