Future floating point directions? [was Re: floating point in 2.0]
Edward Jason Riedy
ejr at cs.berkeley.edu
Tue Jun 12 13:52:01 EDT 2001
And Tim Peters writes:
-
- Which is why I expect Python will sooner grow an 854 story, indeed doing
- decimal fp entirely in software.
FYI, 854 is expiring. The decimal portions are being folded into
an appendix in 754R. The few differences are being resolved in
854's favor.
And decimal floating-point has its own share of problems. An
error of one ulp becomes more substantial: an ulp's relative
size wobbles by a factor of ten across a decade instead of
binary's factor of two. And there are far more rounding
options. It's a trade of one expected thing for other
unexpected things.
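To make the wobble concrete, here's a quick sketch (using
Python's decimal module as a convenient stand-in; nothing like
it ships with Python today):

    # With 3 significant digits, one ulp is 0.01 everywhere in
    # [1, 10), so its *relative* size swings by a factor of ten
    # across a decade -- versus a factor of two in binary.
    from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP, getcontext

    getcontext().prec = 3
    ulp = Decimal("0.01")
    print(ulp / Decimal("1.00"))   # 0.01    -> 1%   relative
    print(ulp / Decimal("9.99"))   # 0.00100 -> 0.1% relative

    # And the extra rounding options: half-even and half-up
    # already disagree on a simple tie.
    q = Decimal("0.01")
    print(Decimal("2.345").quantize(q, ROUND_HALF_EVEN))   # 2.34
    print(Decimal("2.345").quantize(q, ROUND_HALF_UP))     # 2.35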
- new stds are adopted slowly, which is why you shouldn't be so timid
- about requiring what you really want in the end --
Good point; I think I'll steal it at the next meeting. ;) But
people weren't sure if correct decimal<->binary was even possible
at that point, so it's fair for 754 to have dodged... That's also
why we're dodging on elementary functions. Other people are still
trying to find reasonable requirements for those.
- gcc is important, but before Python can rely on a thing literally dozens of
- compilers have to support it, and correctly.
Support in gcc is a forcing function. A few proprietary
compiler people have said they could easily justify spending
money on something gcc supports.
- That's 754's fault, because if we had rotten addition they could
- have blamed the accounting discrepancies on the hardware <wink>.
Heh. I was at UF when KSR tried shutting down the machine we had.
That was amusing. (What happened to KSR was a shame, though. Nice
machine, even with the one magic processor that would bring the
whole thing down.)
- The paying market for these modes isn't worth the bother.
I'm not sure. Sun folks say various additional rounding methods
are wanted by financial groups, and they're willing to pay a bit.
Perhaps not enough, but I don't know the details.
- > I do wish software had access to the three extra bits necessary to
- > round correctly.
-
- So mandate them <0.7 wink>.
It's tempting to bring that up. Very tempting. But then I
also want per-register sticky flags. I really want those.
(They'd make it easy to interleave calculations while
maintaining 754 semantics. IA64 has four FP control / flag
words for that reason...)
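For the curious, here's a toy sketch (integer significands in
Python; real hardware looks nothing like this, and the function
name is mine) of why those three bits -- guard, round, and
sticky -- are all you need for correct round-to-nearest-even:

    def round_nearest_even(sig, extra):
        # Drop `extra` low-order bits from integer significand `sig`.
        # Only three things below the kept bits matter: the guard bit
        # (first dropped), the round bit (second), and a sticky bit
        # (the OR of all the rest); round and sticky are folded into
        # `rest` here.  Assumes extra >= 1.
        kept = sig >> extra
        guard = (sig >> (extra - 1)) & 1
        rest = sig & ((1 << (extra - 1)) - 1)
        if guard and (rest or (kept & 1)):  # over half, or tie-to-even
            kept += 1
        return kept

    assert round_nearest_even(0b10011, 2) == 0b101  # 4.75 -> 5
    assert round_nearest_even(0b1011, 1) == 0b110   # 5.5  -> 6 (even)
    assert round_nearest_even(0b1001, 1) == 0b100   # 4.5  -> 4 (even)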
- The scientific market-- unlike the Linux market --is a shrinking piece of
- the overall pie, though. I'd say binary fp is perceived by the bulk of the
- market as a necessary evil for now, but I'm not sure that will last either.
It will. Decimal has some odd problems of its own, and we'd just
have to go through this all again. Even ARM has an FPU, due to
customer demand. (Looks _nice_, too.)
- Which modern chips do you have in mind that punt on required 754 pieces? I
- know you view 754 as "a systems standard", but most of the world doesn't:
The standard's first paragraph is very clear on that matter.
Its last sentence reads:

    Hardware components that require software support to conform
    shall not be said to conform apart from such software.
Thus, any gaps are required to be filled. Very, very few
architectures handle _everything_ in hardware. If you count the
required trap handling portions, then no general-purpose hardware
can possibly implement all required portions of 754.
That's why I don't feel requirements are much incentive. I
understand your point about hardware support, but the primary
hardware and requirements are there and have been. The
hardware is now starting to _lose_ features because higher
levels have refused to support them. Even when a group comes
up with nice, pretty easy-to-implement libraries (Sun's C9X
proposals), they get shot down.
- (which is the IEEE-like quad format Sun and KSR both implemented
- in the early 90's, so I assume it's still in favor).
Yup; that's the de facto quad. It's in most architectures as
a future possibility now. Sun's compiler now inserts the
correct code directly rather than relying on
unimplemented-instruction traps.
- (apparently after the point you stopped reading <wink>):
No, I was looking in a draft. The section's been re-worded,
and the clause was moved into the new parts, so I assumed it
was new. Serves me right for working with pre-alpha specs. ;)
- There's not enough precision in a single to enter some programmers' salaries
- without catastrophic loss of information <0.9 wink>. It's simply too small
- for general use.
So pay 'em less, give the extra to me, and I'll lobby for a
wider single. ;) Financials are an interesting case in
general. I see your point, but to me it's more an argument for
using quad intermediates when double is necessary than for
ignoring the existence of single.
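To put a number on "too small": single's 24-bit significand
runs out of exact integers at 2**24, which is only $167,772.16
if you're counting cents. A quick demonstration (the helper
name is mine; it just round-trips through a C float):

    import struct

    def as_single(x):
        # Round-trip a Python float (a double) through IEEE single.
        return struct.unpack('f', struct.pack('f', x))[0]

    print(as_single(16777216.0))   # 16777216.0 -- still exact
    print(as_single(16777217.0))   # 16777216.0 -- the odd cent is gone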
- Not in my world. In 15 years at Cray and KSR, neither company built a box
- with HW support for any fp format other than 64 bit.
No offense, but Cray isn't exactly known for caring about
floating-point quality. And as a T3E user, the obsession
with 64-bit-everything can really, really suck. (When I
used a KSR, I didn't really know the difference. But then
I mostly worked on non-FP parallel toys.)
- > Single precision is for limiting the effects of rounding.
-
- Why do I need a distinct storage format for that?
Good point. Hm, now why didn't I think of that solution... It
could work really well with an assertion, and that would
generalize well to intervals. Thanks! I think I may need to
use that idea...
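Something like this, say (assert_single and the epsilon are
invented names, and a real version would pick the bound per
algorithm):

    SINGLE_EPS = 2.0 ** -24   # about half an ulp at 1.0 in single

    def assert_single(value, reference):
        # State the accuracy actually needed -- "good to single
        # precision" -- instead of storing in a narrower format.
        # An interval version would assert enclosure instead.
        assert abs(value - reference) <= SINGLE_EPS * abs(reference)

    total = sum([0.1] * 10)    # computed in double throughout
    assert_single(total, 1.0)  # passes: the error is ~1e-16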
- Some DSPs get by with 16-bit integer "block fp" -- but they know what
- they're doing, and are designed with specific algorithms in mind.
Yeah. I mostly mean the TI series that supports 754. They also
have a single-extended, but it's not 754 single-extended: the
exponent range stays the same, and they increase the
significand bits.
(Ignoring the relationship to your argument... ;) )
- I look forward to the PEP <wink>. Seriously, I don't know how to
- implement this in Python.
I'm not entirely sure yet, either. I want the ability for other
reasons (run-time optimization). And it'll be a general paper
first, but somehow I need to add ~10 pages to the 20 I already
have and end up with at most 10... That'll require interesting
arithmetic.
- It may also be more promising. For example, a new kind of block,
-
- with fp_context=double_extended:
- all intermediates in the block use double extended
Which would suck if one of them were quad. But this type of
block is a good idea. Have you looked over Darcy's Borneo?
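For what it's worth, the block form seems workable. Roughly
(hypothetical for today's Python; I'm borrowing a decimal-style
context object as the model):

    from decimal import Decimal, getcontext, localcontext

    getcontext().prec = 8
    with localcontext() as ctx:
        ctx.prec = 34                   # quad-ish inside the block
        print(Decimal(1) / Decimal(7))  # 34 digits
    print(Decimal(1) / Decimal(7))      # back to 8 digits outside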
- IBM's Standard Decimal Arithmetic Proposal requires user-settable precision
- (akin to, e.g., x86's "precision control" fpu bits) as part of numeric
- context, and it's also a puzzle how to work that into Python smoothly.
It's a puzzle how that proposal fits into anything other than
Rexx smoothly.
- In the Java version it looks like a context argument is explicitly
- passed to every operation -- brrrrrr.
Yeah, the developers (Darcy at Sun and Cowlishaw at IBM) hate
that too, but trying to change Java is even harder... Steele's
all for changing Java to support these things, but he's also
for re-introducing UN and OV. Yech.
- They're not arbitrary-length, they're fixed-length, but of user-settable
- length (fall back on your own view of static vs dynamic here!). "Entirely
- different" seems a stretch, as they're as much "as if to infinite precision
- with one final rounding" as 754 semantics.
Ah, I thought they were the arbitrary-length-until-you-do-something-
interesting variety. I know those are in a GMP derivative, so I
assumed they were in GMP... I have no real problem with fixed
precisions. Some people want to standardize longer fixed precisions,
but I don't think we've come to much agreement there.
- I don't see a reason "in theory" that intervals couldn't be contagious too.
They pretty much need to be, including decimal->binary conversions.
- But this is all so far from current Python it's too unreal to get picky
- about.
Hey, I'm in academia. It's what I do. ;)
- > On the flip side, complex really shouldn't be silently mixed
- > with real at all.
-
- Since the way to spell a complex literal in Python does mix them (like
- 3+4*j: that's an honest-to-god + and * there, not a lexical gimmick),
- that's a hard rule to adopt <wink>.
And turning it into a lexical gimmick would hurt how many codes?
Silently mixing complex and reals is dangerous. It's almost as
bad as Matlab's 1x1 matrix <-> scalar problem.
- Is this special case important enough to stop mountains of code that mix
- real and complex without incident?
No, but the list of other cases may be. This is just the one
I can defend the best, being a linear algebra person. There are
more from various polynomials.
- Guido's view of matrix algebra, and complex numbers, be they in combination
- or isolation, is that they're not on a newbie's radar. "Don't ask, don't
- tell." This is something to take up with the NumPy folks.
Argh. They are when the person is first learning them. I keep
wanting to recommend Python over Matlab, which would also give a
way to link everyday programming with numerics, but the only
thing Python offers is a bit of speed (and freedom, but that
argument goes nowhere here). The separate cmath does attenuate
the problem significantly, though.
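A quick illustration of that separation:

    import math, cmath

    print(cmath.sqrt(-1))   # 1j -- complex was asked for explicitly
    try:
        math.sqrt(-1)       # the real-only version refuses
    except ValueError:
        print("math.sqrt stays real")

    print(3 + 4j)           # but the literal itself mixes int and complex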
- Any change to Python's core binary fp numerics would likely require
- near-universal consensus -- and I just don't see that happening.
Sigh. Death à la Scheme.
Jason, who promises to shut up until there's a better write-up
and hopefully code (though likely OCaml first) available...