Comment on PEP-0238

Tim Peters
Mon Jul 9 00:19:01 CEST 2001

[Guido, to Emile van Sebille]
> Your examples don't show what should happen in this example:
>   precision 0
>   x = 2/7
>   precision 4
>   print x
> In other words, is the precision a property of the printing or a
> property of the value stored?
> There are lots of other ways to get this effect for printing ("%.4f"%x
> comes to mind); if you intended the effect to apply to computation
> results, there are questions about the scope of the calculation.  If I
> write
>   precision 3
>   x = f()	# some function that uses int division
> should every calculation in f() be affected by the precision
> statement?
> Etc., etc.

IBM's Standard Decimal Arithmetic proposal answers such questions, but
abstractly (in terms of the numeric *model*, not in terms of concrete
language syntax):

Fully usable numerics require consulting and updating several kinds of
"global" (really thread-local) state.  The IBM proposal formalizes this as
"the context" component of every numeric computation.  This is akin to 754's
mandatory set of rounding mode and trap-enable flags (implicit inputs to
every fp operation), and sticky "did such-and-such happen?" exception flags
(implicit outputs from every fp operation); so you don't get away from this
in the end even if "all you want" is supporting platform HW fp correctly.
BTW, state-of-the-art rational support also requires context, to determine
whether results should be reduced to lowest terms (some programs run 1000s
of times faster if you don't, others 1000s of times faster if you do, and
only the algorithm author can know which is better).
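
That rational tradeoff is easy to demonstrate.  A toy sketch (mine, not part
of any proposal): raw (numerator, denominator) pairs with no reduction, next
to Python's fractions.Fraction, which reduces to lowest terms via gcd after
every operation:

```python
from fractions import Fraction
from math import gcd

def add_unreduced(a, b):
    """Add rationals held as raw (num, den) pairs -- no gcd reduction."""
    return (a[0] * b[1] + b[0] * a[1], a[1] * b[1])

# Sum 1/1 + 1/2 + ... + 1/20 both ways.
unreduced = (0, 1)
reduced = Fraction(0)
for d in range(1, 21):
    unreduced = add_unreduced(unreduced, (1, d))
    reduced += Fraction(1, d)      # reduces to lowest terms each time

# Same value either way, but the unreduced denominator is 20! while
# the reduced one is far smaller -- each reduction cost a gcd, though.
g = gcd(*unreduced)
assert Fraction(unreduced[0] // g, unreduced[1] // g) == reduced
print(len(str(unreduced[1])), len(str(reduced.denominator)))
```

Whether paying for a gcd on every operation is a win depends entirely on the
algorithm, which is why the choice belongs in context.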

IBM's proposal has similar stuff, and also adds a precision component (an
int > 0) to context.  The rules for how precision affects operations are
spelled out (BTW, Pentium HW also has a hidden "precision control" implicit
input to every fp operation, determining whether the result is rounded to
24, 53 or 64 significant bits; the lack of language support for dealing with
that in C may well be the #1 cause for numeric discrepancies across C
programs run on 754 boxes under different compilers and C libraries).
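
For a concrete feel of precision as an implicit input to every operation,
here's what this looks like in Python's decimal module, which implements
IBM's proposal (the particular division, 2/7, is just an example):

```python
from decimal import Decimal, getcontext

ctx = getcontext()               # the thread-local context
ctx.prec = 4                     # precision: implicit input to every op
print(Decimal(2) / Decimal(7))   # 0.2857
ctx.prec = 28                    # same operands, different context ...
print(Decimal(2) / Decimal(7))   # 0.2857142857142857142857142857
```

Nothing about the operands changed between the two divisions; only the
context did.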

A gloss on this:

> If I write
>   precision 3
>   x = f()	# some function that uses int division
> should every calculation in f() be affected by the precision
> statement?

Believe it or not, probably "yes".  REXX has tons of experience with this,
and life works best if (a) everyone respects precision, and (b) users rarely
fiddle it except once at the start of a program run.

That said, f()'s author also needs to make a choice here.  Most *library*
routines should strive to return a *result* good to strictly less than 1 ULP
wrt the precision in effect at the time they were called, even if that
requires temporarily boosting precision internally.  This kind of construct
would be common in well-written libraries:

    oldp = context.getprecision()
    context.setprecision(max(8, oldp + 3))
    try:
        do stuff
    finally:
        context.setprecision(oldp)  # Restore starting precision.
    return roundtoprecision(result, oldp)
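
Using the decimal module (an implementation of the IBM spec), the same
library pattern can be made concrete; inv_sum and the max(8, oldp + 3)
guard are illustrative choices of mine, not anything the proposal mandates:

```python
from decimal import Decimal, getcontext, localcontext

def inv_sum(values):
    """Sum of reciprocals, good to the caller's precision."""
    oldp = getcontext().prec
    with localcontext() as ctx:        # restores the context on exit
        ctx.prec = max(8, oldp + 3)    # boost working precision internally
        total = sum(Decimal(1) / Decimal(v) for v in values)
    # Back under the caller's context: unary "+" rounds to oldp digits.
    return +total

getcontext().prec = 3
print(inv_sum([3, 7]))   # 0.476 -- computed internally at 8 digits
```

The intermediate sum carries extra digits, but the caller only ever sees a
result rounded to the precision in effect at the call.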

This is reminiscent of Python's:

        try:
            do stuff
        finally:
            restore the context

and more generally of C++ code using:

        MySideEffectObjectClass x(args); // never referenced
        do stuff;
        // No matter how we leave this block, it's guaranteed that
        // x's destructor gets invoked before we're out.

where the constructor allocates a resource (or fiddles a context), and the
destructor releases it (or restores the context).

The last time we solved a "it's *hard* to make sure temporary context
changes get undone!" problem in Python was via adding ">>" to "print" <0.8
wink>.  Python may want a more general approach to this kind of thing.
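
One shape such a general approach can take -- as the decimal module later
did with localcontext() -- is a block-scoped guard that restores the outer
context no matter how the block exits, much like the C++ destructor trick
above:

```python
from decimal import Decimal, getcontext, localcontext

getcontext().prec = 28
with localcontext() as ctx:            # guard: saves the current context
    ctx.prec = 5                       # temporary, block-local change
    inside = Decimal(1) / Decimal(3)   # 0.33333
# Guaranteed restored here, even if the block had raised.
outside = Decimal(1) / Decimal(3)      # back to 28 digits
print(inside, outside)
```

The temporary change can't leak, and the "undo" can't be forgotten.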
