Dynamic precisions [was Re: Comment on PEP-0238]
Edward Jason Riedy
ejr at cs.berkeley.edu
Tue Jul 10 22:31:24 EDT 2001
And Tim Peters writes:
-
- A gloss on this:
-
- > If I write
- >
- > precision 3
- > x = f() # some function that uses int division
- >
- > should every calculation in f() be affected by the precision
- > statement?
-
- Believe it or not, probably "yes". REXX has tons of experience with this,
- and life works best if (a) everyone respects precision, and (b) users rarely
- fiddle it except once at the start of a program run.
-
- That said, f()'s author also needs to make a choice here. Most *library*
- routines should strive to return a *result* good to strictly less than 1 ULP
- wrt the precision in effect at the time they were called, even if that
- requires temporarily boosting precision internally. This kind of construct
- would be common in well-written libraries:
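(As a sketch of that shape, and not Tim's actual snippet: assume
a hypothetical decimal-style module whose localcontext() gives a
scoped context with a settable prec.)

    from decimal import Decimal, localcontext

    def lib_sqrt(x):
        # Library contract: return a result good to < 1 ulp at
        # the *caller's* precision by working at a few extra
        # digits inside.
        with localcontext() as ctx:
            ctx.prec += 3          # temporary internal boost
            r = Decimal(x).sqrt()
        return +r                  # unary plus rounds back to the
                                   # precision in effect at call time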
For others, as Tim already knows: This use of extended precision
is common for calculating elementary functions (sine, pow, arctan,
etc.). See Markstein's book[1] for a good discussion. Note that
this class of methods needs a single additive increase to the
precision, often a small one.
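To give the flavor, here's the textbook guard-digit shape for an
elementary function (same hypothetical decimal context as above;
this is a plain Taylor sum, nothing from Markstein):

    from decimal import Decimal, localcontext

    def dexp(x):
        # e**x by Taylor series, x a Decimal.  A single small
        # additive bump in precision; the sum converges at the
        # boosted precision and is rounded once on the way out.
        with localcontext() as ctx:
            ctx.prec += 2          # two guard digits
            i, lasts, s, fact, num = 0, 0, Decimal(1), 1, Decimal(1)
            while s != lasts:
                lasts = s
                i += 1
                fact *= i
                num *= x
                s += num / fact
        return +s                  # round to caller's precision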
For everyone: ;)
Some growing uses of extended precision:
* A fix-up phase that requires twice the initial precision.
e.g.: Iterative refinement in linear systems. See a
numerical linear algebra text or the XBLAS work[2].
* Repeating a calculation until you know some predicate is
_correct_.
e.g.: Determining orientation in geometric calculations.
See Shewchuk's predicates in CMU's Quake project[3].
Most 'new' uses of extended precision require relative precision
adjustments. Moreover, they often need to multiply the precision
by some power of sqrt(2) each time. (That's for binary; I'm not
sure about decimal, where ulp-sized errors behave quite
differently.)
So currently there seem to be three widely needed classes of
extended-precision use:
1) Adding a small amount to the current precision once.
2) Doubling precision for 'internal' calculations.
3) Repeating a calculation with about sqrt(2) times the
precision.
The first is often a matter of performance. Extra precision
can lead to a simpler algorithm. I've never seen it nested
with other precision increases, but I suppose it could be.
In most cases where nesting would be profitable, simply
doubling the internal precision once would be much easier.
And the small additive increase often bumps the precision up
to about sqrt(2) times the original...
Some precision doublings are mathematically required; see
Demmel's text for the proof in the case of iterative refinement.
Nesting this type of precision change seems quite possible.
It's also a good idea for general expression evaluation,
a quite dead horse.
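As a sketch of the doubled internal precision in refinement's
fix-up step (same hypothetical context; the triangular solves
live elsewhere):

    from decimal import Decimal, localcontext

    def refine_residual(A, x, b):
        # Fix-up phase of iterative refinement: accumulate the
        # residual r = b - A*x at twice the working precision,
        # then round back once.  The doubling is the part that
        # is mathematically required.  Entries are Decimals.
        n = len(x)
        with localcontext() as ctx:
            ctx.prec *= 2
            r = [b[i] - sum(A[i][j] * x[j] for j in range(n))
                 for i in range(n)]
        return [+ri for ri in r]   # round each entry back to
                                   # the caller's precision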
(Note that some of the uses here are tolerant of 'doubled'
precisions. See IBM's Power series or the aforementioned
XBLAS work. Those have interesting rounding properties, so
I'd relegate them to 'expert' tools.)
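(If I read the 'doubled' flavor as head/tail pairs of working-
precision numbers, the basic building block is the error-free
two-sum; a sketch in ordinary binary floats:)

    def two_sum(a, b):
        # Knuth/Moller error-free transformation: s + e == a + b
        # exactly, where s is the rounded float sum and e the
        # rounding error.  No branches, but it leans on correctly
        # rounded IEEE addition, hence the 'expert tool' caveat.
        s = a + b
        t = s - a
        e = (a - (s - t)) + (b - t)
        return s, e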
The third is used almost exclusively where an exact answer (but
not an exact floating-point result) is required. By its nature,
nesting this kind of precision change doesn't make much sense.
It also has a generator-like feel.
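A toy version of the third class (Shewchuk's real predicates use
cheap floating-point filters with proven error bounds; here the
'correct' predicate is simply that the context reports the
determinant exact, which certainly pins down its sign):

    import math
    from decimal import localcontext, Inexact

    def orient2d_sign(ax, ay, bx, by, cx, cy, start_prec=16):
        # Repeat the 2x2 orientation determinant at ~sqrt(2)
        # times more precision each pass until no rounding
        # occurred.  Inputs are Decimals of finite length, so
        # the loop terminates.
        prec = start_prec
        while True:
            with localcontext() as ctx:
                ctx.prec = prec
                ctx.clear_flags()
                det = (bx-ax)*(cy-ay) - (by-ay)*(cx-ax)
                exact = not ctx.flags[Inexact]
            if exact:
                return (det > 0) - (det < 0)
            prec = int(prec * math.sqrt(2)) + 1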
All three are normally scoped, or they really want to be.
What does this have to do with Python? Well, does Python
really need fully arbitrary precision changes? Can the
associated state be simplified if you only allow changes that
multiply the current precision by factors like 1.5 and 2?
Some thread-local stack with _very_ simple, scoped accessors
should suffice for most (all?) uses. The top of the stack
would be the current working precision. The next entry would
be some i/o precision (routine i/o, not terminal i/o). One
accessor could ensure that the top two entries satisfy some
relationship, pushing a new value if they don't. Another
would just push a multiple of the current top.
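To make that concrete, a minimal sketch (every name here is
hypothetical) of such a stack with two scoped accessors:

    import math
    import threading
    from contextlib import contextmanager

    _local = threading.local()

    def _stack():
        # Per-thread precision stack: top is the working
        # precision, the entry below it the routine-i/o one.
        if not hasattr(_local, "prec"):
            _local.prec = [16]     # assumed default digits
        return _local.prec

    def precision():
        return _stack()[-1]

    @contextmanager
    def scaled(factor):
        # Push a multiple of the current top for one block.
        s = _stack()
        s.append(math.ceil(s[-1] * factor))
        try:
            yield s[-1]
        finally:
            s.pop()

    @contextmanager
    def at_least(factor):
        # Ensure the top is at least factor times the entry
        # below, pushing a wider precision only if needed.
        s = _stack()
        below = s[-2] if len(s) > 1 else s[-1]
        pushed = s[-1] < factor * below
        if pushed:
            s.append(math.ceil(factor * below))
        try:
            yield s[-1]
        finally:
            if pushed:
                s.pop()

A refinement step is then just 'with scaled(2): ...', and the
sqrt(2)-retry loop pushes again each pass; the stack keeps every
change scoped and per-thread.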
[BTW, there's some disagreement about the scope of REXX
experience. Some folks feel that the arbitrary precision
settings are only used in calculations that should be fixed-
point. The "fiddling with it once at the beginning" kinda
supports that view... The REXX and IEEE viewpoints diverge
significantly once you look at subnormals.]
Jason
[1] http://www.markstein.org/
Note: The current good-and-still-fast error for elementary
functions is about 0.53 ulp.
[2] http://www.nersc.gov/~xiaoye/XBLAS/
[3] http://www.cs.cmu.edu/~quake/robust.html