Against PEP 240
tim.one at home.com
Wed May 30 02:14:38 CEST 2001
> Rexx is also based on 7.35 meaning exactly 7.35, neither more
> nor less; it was the premier scripting language for both IBM
> mainframes and (I'm told) Amiga computers.
This is a convenient point to draw a distinction: Rexx (and almost all of
the others) still uses floating-point in this case, but with a decimal
(rather than binary) base. So, yes, 7.35 is WYSIWYG, and 7.35/7 is exactly
1.05, but 7.35/17 is again just an approximation. PEP 240 wants much more
than that: it wants unbounded rationals, not just a decimal base.
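For readers following along with a later Python, the decimal-floating-point behavior described here can be sketched with the standard decimal module (the fruit of exactly the kind of work mentioned below); the precision setting of 9 digits mirrors Rexx's default:

```python
# Decimal floating point: 7.35 is stored exactly, and 7.35/7 happens
# to have an exact answer, but 7.35/17 does not -- it is rounded to
# the context precision, just as in Rexx.
from decimal import Decimal, getcontext

getcontext().prec = 9  # 9 significant decimal digits, Rexx's default

print(Decimal("7.35") / Decimal("7"))   # 1.05         -- exact
print(Decimal("7.35") / Decimal("17"))  # 0.432352941  -- rounded
```

So the base is decimal and the literal is WYSIWYG, but it is still floating-point: precision is bounded, and most quotients are approximations.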
Note that Aahz is making good progress behind the scenes implementing a
Python module for IBM's decimal arithmetic proposal (driven by Mike
Cowlishaw -- Rexx's dad).
> I'm not sure what various dialects of Lisp, Scheme, Prolog, Erlang,
> Haskell, and other non-classic languages do, but I'd bet that
> SOME of them take 7.35 as meaning 7.35 (aka 147/20, an exact
> rational number). I even suspect other scripting languages
> behave similarly, though I don't recall precisely.
ABC (Python's predecessor) took 7.35 as an exact rational. But it went way
beyond PEP 240: it took, e.g., 6.02e23 and 1.9187e-219 as meaning exact
rationals too. Guido deliberately didn't do that in Python, and as a fellow
victim of ABC's unpredictable time and space requirements I agreed with him
at the time (before Python 1.0 was released). But then you can blame both
of us for Python's integer division too <wink>.
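By way of contrast, the unbounded-rational behavior PEP 240 asks for -- and that ABC actually had -- can be sketched with the fractions module from later Pythons:

```python
# Unbounded rationals: 7.35 means exactly 147/20, and arithmetic never
# loses information -- but numerators and denominators can grow without
# bound, which is the time/space trap ABC fell into.
from fractions import Fraction

x = Fraction("7.35")   # exactly 147/20
print(x)               # 147/20
print(x / 17)          # 147/340 -- still exact, no rounding

# ABC went further still: scientific notation was exact too.
print(Fraction("6.02e23"))  # 602000000000000000000000
```

Every result is exact, which sounds wonderful right up until a long chain of operations leaves you with a fraction whose numerator has a few hundred thousand digits.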
The Scheme std says any undecorated numeric literal with a decimal point or
exponent is inexact. But #i and #e qualifiers can be attached, so that,
e.g., #e6.02e23 is exact despite that 6.02e23 is not, and #i42 is inexact
despite that 42 is exact. "#i" is nicely ambiguous: it can mean "inexact"
or "incorrect" <#e0.9107 wink>.
But I expect rationals are very unusual in commercial number crunching,
while variants of decimal fixed- and floating-point are extremely common.
That makes the IBM proposal (above) worth taking seriously (and Guido is
predisposed to do so -- after his first reading, he remarked that this
proposal appeared to achieve what they were *trying* to do with ABC).
> Oh, read ALL Kahan has written, and if you emerge still
> thinking you KNOW what you're doing when floating point
> is involved, you're either Tim Peters, or the world champ
> of hubris.
I find it's possible to be both <wink>. But *nothing* about fp comes easily
to anyone, and even Kahan works his butt off to come up with the amazing
things that he does. As Knuth observed in TAoCP (vol 2, ed 3, pg 229):
    Many serious mathematicians have attempted to analyze a sequence
    of floating point operations rigorously, but found the task so
    formidable that they have tried to be content with plausibility
    arguments instead.
It's a bitch. The great thing about the Rexx approach is that if you have a
reason to suspect your results, you can just run the same program again
asking for 2 (or 3, or 4, or ...) times the amount of precision. And that's
the *only* practical way I've ever seen for non-experts to get a handle on
either the existence or cause of numeric surprises.
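That Rexx-style check can be sketched with decimal contexts: wrap the computation in a function that takes the precision as a parameter, rerun it at 2x, 4x, ... the digits, and watch whether the leading digits stabilize (the function and precision values here are illustrative, not part of any proposal):

```python
# Rerunning the same program at ever-higher precision is a cheap way
# for non-experts to spot numeric trouble: if the leading digits keep
# shifting as precision grows, the result cannot be trusted.
from decimal import Decimal, localcontext

def compute(prec):
    """The same program, rerun at a requested decimal precision."""
    with localcontext() as ctx:
        ctx.prec = prec
        # A computation whose exact answer is 1, but which rounds.
        return Decimal(1) / Decimal(3) * 3

for prec in (5, 10, 20):
    print(prec, compute(prec))
# 5  0.99999
# 10 0.9999999999
# 20 0.99999999999999999999
```

Here the digits converge toward 1 as precision doubles, so the tiny discrepancy is mere roundoff; leading digits that *changed* between runs would be the smoking gun.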
The 754 committee was hoping to get a more modest version of the same thing
by recommending single-extended and double-extended formats, but while Intel
implemented double-extended in the Pentium line, terrible language support
rendered it worse than useless (you can never predict, e.g., when MSVC will
or won't use double-extended, so instead of a great safety net it turned
into just another source of unpredictable numeric surprise).