When I teach, I usually present this to students:

>>> (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
False

This is a really easy way to convey that "floating point numbers are approximations where you often encounter rounding errors."  The fact that these "edge cases" are actually pretty central and commonplace in decimal approximations makes this a very easy lesson to teach.

After that, I might discuss comparing with explicit deltas, or, better yet, with math.isclose() or numpy.isclose().  Sometimes I'll touch on absolute versus relative tolerance in passing at this point.
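For example, with the stdlib:

>>> import math
>>> math.isclose(0.1 + 0.2, 0.3)             # default is a relative tolerance
True
>>> math.isclose(0.0, 1e-12)                 # relative tolerance is useless near zero
False
>>> math.isclose(0.0, 1e-12, abs_tol=1e-9)   # absolute tolerance handles that case
True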

Simply because the edge cases of working with e.g. '0xC.68p+2' in a hypothetical future Python are less obvious and less simple to demonstrate, I suspect learners will be tempted to think that this base-2/16 representation saves them from approximation issues altogether, and from the continuing need for isclose() and friends.
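Concretely, with float.fromhex() standing in for the hypothetical literal:

>>> float.fromhex('0xC.68p+2')    # exactly (12 + 6/16 + 8/256) * 4
49.625
>>> a = float.fromhex('0x1.999999999999ap-4')   # the double nearest 0.1
>>> a + a + a       # the hex spelling doesn't prevent rounding in arithmetic
0.30000000000000004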



On Thu, Sep 21, 2017 at 8:14 PM, Tim Peters <tim.peters@gmail.com> wrote:
[David Mertz <mertz@gnosis.cx>]
> -1
>
> Writing a floating point literal requires A LOT more knowledge than writing
> a hex integer.

But not really more than writing a decimal float literal in
"scientific notation".  People who use floats are used to the latter.
Besides using "p" instead of "e" to mark the exponent, the only
differences are that the mantissa is expressed in hex instead of in
decimal, and the implicit base to which the exponent is applied is 2
instead of 10.

Either way it's a notation for a rational number, which may or may not
be exactly representable in the native HW float format.
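Concretely, the same value both ways (float.fromhex() standing in
for the proposed literal syntax):

>>> 1.5e1                       # decimal mantissa, exponent applied to base 10
15.0
>>> float.fromhex('0x1.ep3')    # hex mantissa, exponent applied to base 2
15.0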


> What is the bit length of floats on your specific Python compile? What
> happens if you specify more or less precision than actually available?
> Where is the underflow to subnormal numbers?

All the same answers apply as when using decimal "scientific
notation".  When the denoted rational isn't exactly representable,
a maze of rounding, overflow, and/or underflow rules applies.  The
base the literal is expressed in doesn't really have anything to do
with those.
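For example, hand fromhex() more mantissa bits than a double holds,
and it rounds, just as a too-precise decimal literal does:

>>> x = float.fromhex('0x1.23456789abcdef123p0')   # more bits than fit in 53
>>> x.hex()                                        # silently rounded
'0x1.23456789abcdfp+0'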


> What is the bit representation of [infinity]? Nan?

Hex float literals in general have no way to spell those:  they're just
a way to spell a subset of (mathematical) rational numbers.  Python,
however, does support special cases for those:

>>> float.fromhex("inf")
inf
>>> float.fromhex("nan")
nan


> -0 vs +0?

The obvious first attempts work fine for those ;-)
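For example:

>>> float.fromhex('0x0p0'), float.fromhex('-0x0p0')
(0.0, -0.0)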


> There are people who know this and need to know this. But float.fromhex() is
> already available to them. A literal is an attractive nuisance for people
> who almost-but-not-quite understand IEEE-854.

As notations for rationals, nobody needs to understand 854 at all to
use these things, so long as they stick to exactly representable
numbers.  Whether a specific literal _is_ exactly representable, and
what happens if it's not, does require understanding a whole lot - but
that's also true of decimal float literals.

> I.e. those people named neither Tim Peters nor Mike Cowlishaw.

Or Mark Dickinson ;-)

All that said, I'm -0 on the idea.  I doubt I (or Mark, or Mike) would
use it, because the need is rare and float.fromhex() is already
sufficient.  Indeed, `fromhex()` is less annoying, because it does
support special cases for infinities and NaNs, and doesn't _require_ a
"0x" prefix.



--
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.