[Numpy-discussion] Re: [Matrix-SIG] An Experiment in code-cleanup.

Andrew P. Mullhaupt amullhau at zen-pharaohs.com
Wed Feb 9 12:21:37 EST 2000


> Konrad Hinsen wrote:
>
> > But back to precision, which is also a popular subject:
>
> but one which even numerical programmers don't seem to
> understand ...

Some do, some don't.


> It is NOT safe to convert floating point from a lower to a higher number
> of bits.

It is usually safe. Extremely safe. Safe enough that code in which it is
_not_ safe is badly designed.
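
To make "usually safe" concrete (a tiny illustration of my own, in NumPy
notation): widening float32 to float64 is exact, because every float32 value
is exactly representable as a float64.

    import numpy as np

    x32 = np.float32(0.1)       # nearest float32 to 0.1
    x64 = np.float64(x32)       # widening conversion: exact, no rounding

    # The widened number is the very same quantity; nothing is lost.
    assert float(x64) == float(x32)

    # Widening does not add information either: the float32 rounding of 0.1
    # is not the float64 rounding of 0.1.
    assert x64 != np.float64(0.1)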

> ALL such conversions should be removed for this reason: any
> conversions should have to be explicit.

I really hope not. A generic function with six different arguments becomes
an interesting object in a language without automatic conversions. Usually,
a little table-driven piece of code has to cast the arguments into
conformance, and then one of several type-specific versions of essentially
the same code is applied.
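
Here is the sort of thing I mean by table-driven casting (my own sketch with
made-up names, not code from any particular package). With only two floating
types and a single binary function the table already has four entries; it
only gets worse with more types and more arguments.

    import numpy as np

    # Coercion table: for each pair of argument dtypes, the common dtype in
    # which the computation should be carried out.
    COERCE = {
        (np.float32, np.float32): np.float32,
        (np.float32, np.float64): np.float64,
        (np.float64, np.float32): np.float64,
        (np.float64, np.float64): np.float64,
    }

    def make_kernel(common):
        def kernel(a, b):
            # Cast the arguments into conformance with the table's choice,
            # then run the type-specific code path.
            return a.astype(common) + b.astype(common)
        return kernel

    KERNELS = {t: make_kernel(t) for t in (np.float32, np.float64)}

    def generic_add(a, b):
        a, b = np.asarray(a), np.asarray(b)
        common = COERCE[(a.dtype.type, b.dtype.type)]
        return KERNELS[common](a, b)

    generic_add(np.float32(1.0), np.float64(2.0))   # computed as float64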


> which refines a calculation until the measure k stops decreasing.
> This algorithm may terminate when k is a float, but _fail_ when
> k is a double  -- the extra precision may cause the algorithm
> to perform many useless iterations, in which the precision
> of the result is in fact _lost_ due to rounding error.

This is a classic bad programming practice and _it_ is what should be
eliminated. It is a good, (and required, if you work for me), practice that:

1. All iterations should have termination conditions which are correct; that
is, which prevent extra iterations. Such conditions are typically
precision-sensitive, but that is simply something that has to be taken into
account when writing the termination condition.

2. All iterations should be protected against an unexpectedly large number
of iterates taking place. (Both practices are sketched just below.)
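
For instance (a toy example of my own, with made-up names, not anyone's
production code), a Newton iteration for a square root can satisfy both
requirements at once: the stopping test is written in terms of the precision
of the type actually in use, and the loop is capped regardless.

    import numpy as np

    def guarded_sqrt(y, dtype=np.float64, max_iter=100):
        """Newton iteration for sqrt(y), y > 0, carried out in dtype."""
        eps = np.finfo(dtype).eps           # unit roundoff of the working type
        y = dtype(y)
        x = y if y > 1 else dtype(1)        # crude but adequate starting guess
        for _ in range(max_iter):           # point 2: bound the iteration count
            x_new = dtype(0.5) * (x + y / x)
            # Point 1: stop on a relative change commensurate with the working
            # precision, not on "the residual stopped decreasing".
            if abs(x_new - x) <= 4 * eps * abs(x_new):
                return x_new
            x = x_new
        raise RuntimeError("guarded_sqrt: no convergence within max_iter steps")

    guarded_sqrt(2.0, dtype=np.float32)     # terminates properly in float
    guarded_sqrt(2.0, dtype=np.float64)     # and in double, without extra work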

There are examples of iterations which are intrinsically stable in lower
precision and not in higher precision (Brun's algorithm), but those are quite
rare in practice. (Note that the Ferguson-Forcade algorithm, as implemented
by Lenstra, Odlyzko, and others, has completely supplanted any need to use
Brun's algorithm as well.)

When an algorithm converges because of lack of precision, it is because the
rounding error regularizes the problem. This is normally referred to in the
trade as "idiot regularization". It is, in my experience, invariably better
to choose a regularization that is specific to the computation than to rely
on rounding effects which might differ from machine to machine.
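
As an illustration of choosing the regularization yourself (a toy problem of
my own construction, not anything from the thread): an explicitly chosen
ridge parameter makes an ill-conditioned least-squares problem well posed in
a documented, machine-independent way, instead of leaving the job to
whatever the rounding happens to do.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 5))
    A[:, 4] = A[:, 3] + 1e-10 * rng.standard_normal(100)  # nearly dependent columns
    b = rng.standard_normal(100)

    lam = 1e-8   # explicit, documented regularization parameter
    # Tikhonov (ridge) regularized normal equations: (A'A + lam*I) x = A'b
    x = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ b)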

In particular, your type of example is in for serious programmer enjoyment
hours on Intel or AMD machines, which have 80-bit-wide registers for all
their floating-point arithmetic.

Supporting needless machine dependency is not something to argue for,
either, since Cray-style floating-point arithmetic has a bad error model.
Even Cray has been beaten into submission on this, finally releasing
IEEE-compliant processors, but only just recently.

> to put this another way, it is generally bad to keep more digits (bits)
> or precision than you actually have

I couldn't agree less.

The exponential function and inner-product accumulation are famous examples
of why extra bits are important in intermediate computations. It is almost
impossible to write an accurate exponential function without using extra
precision, which is one reason why so many machines have extra bits in their
FPUs and why there is an IEEE "extended" precision type.
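
Here is the inner-product point in miniature (sizes and seed are arbitrary,
my own choice): accumulating single-precision data in a double-precision
intermediate tracks the correct answer far better than rounding the running
sum back to single precision at every step.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_normal(50_000).astype(np.float32)
    y = rng.standard_normal(50_000).astype(np.float32)

    # Reference: the same float32 data, accumulated entirely in double.
    ref = float(np.dot(x.astype(np.float64), y.astype(np.float64)))

    # Naive single-precision accumulation: the running sum is rounded back to
    # float32 at every step, so the rounding errors pile up.
    acc32 = np.float32(0.0)
    for xi, yi in zip(x, y):
        acc32 = np.float32(acc32 + xi * yi)

    # The same loop, but with extra bits in the intermediate accumulator.
    acc64 = 0.0
    for xi, yi in zip(x, y):
        acc64 += float(xi) * float(yi)

    print(abs(acc32 - ref), abs(acc64 - ref))   # the wide accumulator is far closer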

The storage history effects which result from temporarily increased
precision are well understood, and mild in the sense that they violate no
common error model used in numerical analysis.

And for those few cases where testing for equality is needed for debugging
purposes, many systems permit you to impose truncation and eliminate storage
history issues.
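
In NumPy terms that amounts to something like the following (my own phrasing
of the idea, not any particular system's facility):

    import numpy as np

    def storage_equal(a, b, storage=np.float32):
        # Push both quantities through the declared storage format, discarding
        # any extra intermediate precision, before comparing.
        return storage(a) == storage(b)

    # Two double-precision intermediates that differ only below float32
    # precision compare equal once truncated to the storage format.
    a = 0.1
    b = 0.1 + 1e-12
    assert a != b
    assert storage_equal(a, b)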

Later,
Andrew Mullhaupt




