floating point in 2.0

Edward Jason Riedy ejr at cs.berkeley.edu
Fri Jun 8 13:00:47 EDT 2001


And Tim Peters writes:
 - 
 - Ack -- if you folks haven't learned anything else <wink> from the 16 
 - years of half-baked 754 support so far, believe me when I tell you 
 - that when it comes to f.p., most vendors see "strongly recommended" 
 - and think "OK, we can skip this one -- next.".

Hell, they do that if it's required.  One of the good-and-bad
points of 754 is that it's a _system_ standard.  It's good in
that it can't be undercut by some portion of the system.  It's
bad because each part simply points to the other parts for
compliance.  

For example, 754 requires single precision if double precision
is supported, but a certain relevant language doesn't provide
it.  (Well, ok, you can use Numpy with a one-element array.)
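
A minimal sketch of that workaround in today's NumPy spelling (the
Numeric API of the day differed, so treat the names as illustrative):

    import numpy as np

    # Python's float is always IEEE double; the closest thing to a
    # native single is a NumPy scalar, e.g. from a one-element
    # float32 array.
    x = np.array([0.1], dtype=np.float32)[0]
    print(x)         # 0.1, rounded to single precision
    print(float(x))  # 0.10000000149011612, the double nearest that single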

 - Make it required or don't bother -- really.

We also have to consider how we're getting this through the
IEEE process.  If we come out and say C99 ain't 754 (it mandates
6 digit output by default), well, I think some people will be
upset.
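
The 6-digit default is easy to see from Python, whose %-formatting
follows C's printf here:

    # %g defaults to 6 significant digits, which cannot round-trip
    # a double; %.17g always can.
    x = 1.0 / 3.0
    print('%g' % x)     # 0.333333
    print('%.17g' % x)  # 0.33333333333333331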

 - Vendors don't compete on mere "quality of implementation" f.p issues 
 - because their customers overwhelmingly can't tell the difference so 
 - don't care.

That's not entirely true.  Consider IBM.  They added fused 
multiply-accumulate (being added to 754) and a ton of diagnostics 
that are not required.  It wasn't just out of the goodness of 
their hearts.
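
A quick sketch of what fused multiply-accumulate buys, emulated here
with exact rationals since plain Python has no fma (the inputs are
contrived to make the rounding visible):

    from fractions import Fraction

    # a*b + c with two roundings loses the tail of the product; a
    # fused multiply-accumulate rounds once, at the very end.
    a = b = 1.0 + 2.0**-27
    c = -(1.0 + 2.0**-26)
    print(a * b + c)  # 0.0: the 2**-54 tail of a*b was rounded away
    print(float(Fraction(a) * Fraction(b) + Fraction(c)))
    # 5.551115123125783e-17, the fused answer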

Similarly, Sun has a host of easy-to-use exception handling 
libraries.  So far, I'm the only confirmed user outside Sun.  The 
libraries aren't going away because the internal support folks use 
them to diagnose client problems.

 - That's why you have near-universal support for denorms and directed 
 - rounding modes today, [...]

Denorms, yes, but who supports directed rounding modes?  Currently
C99 is the only major language to give reasonable (arguably)
access to them.  System standard, not hardware standard.
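
Python exposes no hardware rounding modes at all.  The closest
illustration I can give in Python is the (much later) decimal
module, whose per-context rounding at least shows the idea of
bracketing one computation from below and above:

    from decimal import Decimal, getcontext, ROUND_FLOOR, ROUND_CEILING

    getcontext().prec = 6
    getcontext().rounding = ROUND_FLOOR
    print(Decimal(1) / Decimal(3))  # 0.333333, rounded toward -infinity
    getcontext().rounding = ROUND_CEILING
    print(Decimal(1) / Decimal(3))  # 0.333334, rounded toward +infinity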

Oh, and actually, _no one_ supports denorms as intended.  There's
a clause (I don't have it handy) that can be read a few different
ways.  The author intended one reading, but all implementors took
another.  ;)  And it looks like there's actually a bug in one
reading where you can have an inexact answer without an inexact 
flag.  I seem to have lost my notes on it, which is rather upsetting.
Denorms will see some simplification as a result of all this; we're
not sure what yet.
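
For anyone who hasn't met denorms: they exist so that underflow is
gradual rather than a cliff.  A quick illustration of the feature
itself (not of the contested clause):

    # Halving the smallest positive normal double lands on a denorm
    # instead of collapsing straight to zero.
    print(2.2250738585072014e-308 / 2)  # 1.1125369292536007e-308, a denorm
    print(5e-324 / 2)  # 0.0: below the smallest denorm, so we hit bottom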

 - but virtually nothing that makes intended ("recommended") use of the 
 - NaN bits for retrospection, and see most vendors punt on implementing 
 - ("recommended") double-extended.

We're arguing about the NaN bits.  I think we've decided that 
operations should return the input NaN with the least significand, 
but it will only be a serious recommendation this time.  We know
perfectly well that people will ignore it; we've been told point-
blank.  There is some verbal support from within Sun, though.

This is only useful when combined with some trap-handling options.
They're up for argument in a few meetings, but everyone should be
happy with the proposal.
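
In the meantime, anyone who wants to poke at a NaN's bits from
Python can do it with struct; this sketch assumes IEEE doubles and
masks off the 51 payload bits below the quiet bit:

    import struct

    def nan_payload(x):
        # Reinterpret the double's bits as an integer and keep the
        # low 51 bits of the significand (the payload).
        bits = struct.unpack('<Q', struct.pack('<d', x))[0]
        return bits & ((1 << 51) - 1)

    print(nan_payload(float('nan')))  # typically 0; nothing sets it yet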

 - Even if I ask for, e.g., a %.55g format?

No.  ``By default'' would apply in the %g case.  When a user 
specifies something specific, the user gets something specific.
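
Python (any reasonably modern CPython, anyway) behaves the same
way.  Ask for 55 digits and you get them; for 0.1 that happens to
be the exact decimal expansion of the nearest double:

    print('%.55g' % 0.1)
    # 0.1000000000000000055511151231257827021181583404541015625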

 - but they did this work in the context of the then-pending Scheme 
 - standard.

Don't taint the work by its purpose.  ;)  Scheme has severely
borken numeric support.  (No, really, arbitrarily long integers
are _not_ a subtype of fixed floating-point precisions.)

 - After their paper appeared, IIRC it got a chilly reception
 - on David Hough's numeric-interest mailing list, because it's not 
 - "properly rounded" in a clearly explainable sense.

Hmm.  I recall people brought this up towards the end of the
last meeting.  Most people there were interested in having
a default format that ensures different numbers are printed
differently and the same numbers are printed the same way.
Dealing with truncated outputs is still on the table, I think.

 - "Worth it" surely depends on the goal.  The knock against Steele & 
 - White's version of this is sloth, not complication; David Gay's 
 - faster version can be fairly characterized as (very) complicated, 
 - though.

True.  I tend to assume Gay's version.

 - Provided that the source and destination formats have the same precision,
 - and assuming correctly-rounded decimal->binary conversion too.

All decimal->binary conversions are correctly rounded.  That's
a new requirement, but the language is still in progress.  Binary
to decimal is constrained so that binary -> decimal -> binary 
produces the same number if both binary numbers are in the same
precision.  Dealing with NaNs and infinities is the real problem.
I want to avoid specifying a textual representation to avoid the
character encoding mess.  We'll see.
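
The binary -> decimal -> binary requirement is easy to test from
Python, where repr() has long promised enough digits to round-trip
(17 significant digits always suffice for a double):

    import random

    # Every double should survive a trip through its decimal string.
    for _ in range(10000):
        x = random.uniform(-1e10, 1e10)
        assert float(repr(x)) == x
        assert float('%.17g' % x) == x
    print('round-trip holds')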

 - Very good advice indeed.  What if they're accumulating in double instead?

Then use quad.  ;)  It's being added.  The point is that languages
should use lower precision by default for stored values and a 
common higher precision for intermediate values.  The ``rules of 
thumb'' proposed by a certain numerical analyst aren't perfect for 
all cases, but they help the majority of computations.  They can 
be summarized as:

	* Store known data narrowly,
	* compute intermediates widely, and
	* derive properties widely from known data.

See the end of http://www.cs.berkeley.edu/~wkahan/MktgMath.pdf for
more verbosity.  There are quite a few subtleties in these rules;
I'm trying to make them a bit more explicit.  Plus, I'm trying to
figure out how languages can best support them.  
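
As a concrete sketch of storing narrowly and computing widely
(NumPy spelling again, purely illustrative):

    import numpy as np

    # Store known data narrowly: single precision for the inputs.
    data = np.linspace(0.0, 1.0, 1000000, dtype=np.float32)

    # Compute intermediates widely: accumulate in double, and narrow
    # the derived result only at the end, if at all.
    wide = data.sum(dtype=np.float64)

    # Accumulating in single precision throughout is the trap.
    narrow = data.sum(dtype=np.float32)

    print(wide, narrow)  # the wide sum is the trustworthy one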

Offering more than one precision is a must.  And then using a
widest-feasible evaluation strategy (widest-need with a lower
bound on precision) would help maintain expectations.  It also
helps with dispatching and dealing with decimal literals.

 - Without reliable access to double-extended, I'm afraid we're left with
 - obscure transformations.

Um, I think you mean ``[w]ithout reliable access to single''.  ;)
Python's default really should be single precision with at least
double precision for all intermediates.  With that and Steele /
White / Gay's output for default, I bet there would be far fewer
fp questions.  Many things would just work.  

(I think I posted a more reliable summation in Python a while 
back, too.  Anyone looking for good summation algorithms should 
head to Higham's paper on the subject:
  http://citeseer.nj.nec.com/higham93accuracy.html )
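
Compensated (Kahan) summation is the standard trick; a quick sketch
(not necessarily what I posted back then):

    def kahan_sum(xs):
        # Carry the low-order rounding error along so it isn't lost
        # once the running total grows large.
        total = 0.0
        c = 0.0
        for x in xs:
            y = x - c
            t = total + y
            c = (t - total) - y
            total = t
        return total

    print(sum([0.1] * 10))        # 0.9999999999999999
    print(kahan_sum([0.1] * 10))  # 1.0, the correctly rounded sum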

A widest-feasible evaluation scheme could get rid of the whole
right-method v. left-method v. coerce nonsense, too, but at the
cost of changing the code generation and interface considerably.
It generalizes to intervals cleanly, but arrays take a bit more
thought.

Jason
-- 


