Builtin Float Epsilon? (was: Re: Does python suck or I am just stupid? )
tim_one at email.msn.com
Sun Feb 23 09:15:15 CET 2003
> In practice, I think it boils down to: floating-point is hard, and
> there is no "royal road" that will shield programmers who choose
> to use floating-point from understanding what they're doing, even
> though strange little anomalies will probably keep surfacing. I
> _think_ it follows, as night follows day, that Python should NOT
> foist floating-point on unsuspecting users who do NOT really know
> what they're doing in the matter (over 90% of us, I fear) -- e.g.,
> true division and decimal literals should map to fixed-point or
> rational types,
Noting that fixed-point is also subject to rounding errors (e.g., division
can't help needing to round in a fixed-point scheme).
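A quick illustration of that point, sketched with Python's `decimal` module standing in for a fixed-point scheme with two fractional digits (the `fixed_div` helper is hypothetical, just for the demo):

```python
from decimal import Decimal, ROUND_HALF_EVEN

# A "fixed-point" value here is a Decimal quantized to two fractional
# digits.  Division can't avoid rounding: 1/3 has no exact 2-digit answer.
PENNY = Decimal("0.01")

def fixed_div(a, b):
    """Divide two fixed-point values, rounding the quotient back to
    two fractional digits (banker's rounding, as one possible choice)."""
    return (a / b).quantize(PENNY, rounding=ROUND_HALF_EVEN)

q = fixed_div(Decimal("1.00"), Decimal("3.00"))
print(q)  # 0.33 -- the exact quotient 0.333... had to be rounded
```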
> and floating-point should only be used when it is required explicitly.
> Unfortunately Guido disagrees (and his opinion trumps mine, of course),
> because "shielding user from floating point" was what ABC, Python's
> precursor language, did,
ABC eventually grew a floating type, but there were no floating literals.
The unary prefix operator ~ converted a numeric operand to a float (although
it was called something other than a float). So, e.g., 1.02e-300 was an
exact rational in ABC, while ~1.02e-300 was a float.
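One way to mimic ABC's split in modern Python, assuming `fractions.Fraction` as a stand-in for ABC's exact rationals and an explicit `float()` call playing the role of `~`:

```python
from fractions import Fraction

# In this sketch, the literal is kept as an exact rational; only the
# explicit conversion (ABC's prefix ~) produces a lossy float.
exact = Fraction("1.02e-300")   # exactly 51 / (50 * 10**300)
approx = float(exact)           # the explicit, rounding conversion

print(exact == Fraction(51, 50 * 10**300))  # True: no information lost
print(approx == exact)                      # False: the float had to round
```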
> and floating point is SO much faster than the alternatives (as it
> can exploit dedicated hardware present in nearly every computer
> of today) that defaulting to non-floating point for non-integer
> numerical calculations might be perceived by naive users as an
> excessive slowing-down of their programs, if said programs perform
> substantial amounts of numeric computation. Oh well.
That was the case in ABC, and the time and memory burdens were severe. I'm
not sure why that was a common experience in ABC, when it doesn't seem to be
in, e.g., full-featured Scheme implementations. My best guess has been
that, unlike Scheme, ABC considered numbers in exponential notation to be
exact rationals, and nobody with a lick of programming experience before
coming to ABC ever guessed that this would lead to simple linear-time
numeric algorithms turning into quadratic-time (or worse) ones. Massive
growth in time and/or memory consumption is also extremely frustrating for
newbies, of course.
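The blow-up is easy to reproduce. A sketch, iterating an arbitrary rational map with exact `Fraction` arithmetic: each step roughly doubles the digit count of the numerator and denominator, so per-step cost grows instead of staying constant as it would with floats.

```python
from fractions import Fraction

# Iterate the logistic map x <- r*x*(1-x) with r = 7/2, exactly.
# Watch the denominator's digit count roughly double every step.
x = Fraction(1, 3)
for step in range(1, 6):
    x = Fraction(7, 2) * x * (1 - x)
    print(step, len(str(x.denominator)))
```

After only five steps the reduced denominator already has 16 digits; a few dozen steps in, each multiplication is operating on huge integers, which is how a linear-time loop quietly turns quadratic (or worse).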
I don't believe there are any surprise-free forms of computer arithmetic.
IBM's proposal for decimal arithmetic is chock full of surprises too, but a
lifetime of experience with paper-and-pencil decimal calculations, and with
hand calculators, hardens people against the specific kinds of surprises it
offers. At heart, there's nothing in binary floating-point that's more
surprising than that 10/3 = 3.3333333 in an 8-digit decimal fp system, and
that 1/7 isn't exactly representable, but people are "just used to" the
errors in decimal math.
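Python's `decimal` module (which follows the same Cowlishaw/IBM decimal spec mentioned above) makes the 8-digit example concrete:

```python
from decimal import Decimal, getcontext

# The 8-significant-digit decimal fp system from the text.
getcontext().prec = 8

print(Decimal(10) / Decimal(3))      # 3.3333333 -- rounded, like binary fp
print(Decimal(1) / Decimal(7))       # 0.14285714 -- 1/7 isn't exact either
print(Decimal(1) / Decimal(7) * 7)   # 0.99999998 -- not exactly 1
```

The errors are the same in kind as binary floating-point's; they just land on numbers people already expect from long division on paper.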