[Python-Dev] Revamping Python's Numeric Model
Mon, 6 Nov 2000 01:34:38 -0500
> So I go offline for a couple of days to entertain guests and have my
> body kicked around in a dance class, and I have 25 messages discussing
> Python's numeric model waiting for me...
The scary thing is which one of those you clearly enjoyed more <wink>.
> I was hoping that Tim would chime in, but he's apparently taken the
> weekend off -- very much out of character. :-)
Exactly in character, alas: I was obsessed with my new cable modem
connection. I had years of stuff to learn about firewalls in two days --
not to mention years of pornography to download in one <wink>.
Some quickies for now:
+ Konrad Hinsen needs to be sucked in. He's been arguing for a "unified"
numeric model forever.
+ Everyone has IEEE-754 fp hardware today; some people actually want to use
it; Moshe doesn't, but whatever revamping we get needs to allow others their
full use of it.
> For example, Tim has conjectured that using binary floating point will
> always be a problem for the "unwashed masses" -- the only thing they
> might understand is decimal floating point,
At first glance, yes. Complaints traced to the "binary" part of "binary fp"
vastly outnumber complaints due to the "fp" part *and* integer division
combined, on both Python-Help and the Tutor list. So if we want to know
what actually trips up newbies, they've been telling us for years. Decimal
fp would silence most of those complaints; but rationals would silence them
too (provided they're *displayed* in rounded decimal fp notation (restart
"str" vs "repr" rant, and that the interactive prompt uses the wrong one,
and ditto str(container))), plus a few more (non-obvious example:
does not equal 1 in either decimal or IEEE-754 binary double fp, but does
equal 1 using rationals).
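The expression in that parenthetical got dropped somewhere in transit; a
stand-in that behaves the same way (my pick, not necessarily the one from the
original message) is 1/49 * 49, which Python's fractions and decimal modules
can contrast directly:

```python
from fractions import Fraction
from decimal import Decimal

# Binary double fp: 1/49 isn't exactly representable, and the rounding
# error survives the multiplication (illustrative expression, chosen here).
assert 1 / 49 * 49 != 1.0

# Decimal fp (28-digit default context): 1/3 rounds to 0.333...3, and
# tripling that gives 0.999...9, not 1 -- decimal fp fails here too.
assert Decimal(1) / 3 * 3 != 1

# Rationals stay exact all the way through.
assert Fraction(1, 49) * 49 == 1
assert Fraction(1, 3) * 3 == 1
```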
Note that Mike Cowlishaw (REXX's dad) has been working on a scheme to merge
REXX's decimal fp with IEEE-854 (the decimal variant of IEEE-754):
I'll talk to Jim Fulton about that, since Cowlishaw is pushing a BCD variant
and Jim was wondering about that (around the change of the year, for
use -- I presume -- in Zope).
Note also IBM's enhanced BigDecimal class for Java:
> Another issue that I might bring up is that there are no inexact
> numbers (each floating point number is perfectly exact and rational)
> -- there are only inexact operations. I'm not sure what to do with
> this though.
IEEE-754 defines exactly what to do with this, for binary floats (and your
hardware has an "inexact result" flag set or not after every fp operation).
Conversion of the string "1.0" to float must not set it; conversion of "0.1"
must set it; and similarly for + - * / sqrt: "inexact result" gets set
whenever the infinitely precise result differs from the computed result. So
inexactness there is neither a property of types nor of numbers, but of
specific computations. Extreme example:
x = 1./3. # inexact
y = x-x # exact result (from inexact inputs!)
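CPython doesn't expose the 754 inexact flag for binary floats, but the decimal
module carries the same operation-based signal in its context, and exact vs.
inexact string conversion can be checked through Fraction; a small sketch:

```python
from decimal import Decimal, getcontext, Inexact
from fractions import Fraction

# Converting "1.0" to binary fp is exact; converting "0.1" is not:
# the stored double differs from the mathematical 1/10.
assert Fraction(float("1.0")) == 1
assert Fraction(float("0.1")) != Fraction(1, 10)

# decimal's context records the operation-based Inexact signal.
ctx = getcontext()

ctx.clear_flags()
x = Decimal(1) / 3            # infinitely precise 0.333... must be rounded
assert ctx.flags[Inexact]     # inexact operation: flag is set

ctx.clear_flags()
y = x - x                     # exact result (zero) from inexact inputs
assert not ctx.flags[Inexact]
```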
I know that this version (operation-based) of inexactness can be useful. I
see almost no earthly use for calling every number of a given type inexact.
Tagging individual numbers with an exact/inexact bit is an extremely crude
form of interval arithmetic (where the intervals are single points or
unbounded).
> I'll leave it to Tim to explain why inexact results may not be close
> to the truth.
> Tim may also break a lance for IEEE 754.
Somebody else on c.l.py offered to write a 754 PEP; delighted to let them
but-if-you-don't-they'll-run-out-of-time-or-memory-ly y'rs - tim