[Matrix-SIG] An Experiment in code-cleanup.
Thu, 10 Feb 2000 03:12:49 +1100
Konrad Hinsen wrote:
> But back to precision, which is also a popular subject:
> but one which even numerical programmers don't seem to
> The upcasting rule thus ensures that
> 1) No precision is lost accidentally. If you multiply a float by
> a double, the float might contain the exact number 2, and thus
> have infinite precision. The language can't know this, so it
> acts conservatively and chooses the "bigger" type.
> 2) No overflow occurs unless it is unavoidable (the range problem).
... which is all wrong.
It is NOT safe to convert floating point from a lower to a higher
number of bits. ALL such implicit conversions should be removed for
this reason: any conversion should have to be explicit.
The reason is that whether a conversion to a larger number of bits
is safe or not is context dependent (and so it should NEVER be done
silently). Consider a loop
    k0 = 100
    k = 99
    while k < k0:
        k0 = k
        k = ...
which refines a calculation until the measure k stops decreasing.
This algorithm may terminate when k is a float, but _fail_ when
k is a double -- the extra precision may cause the algorithm
to perform many useless iterations, in which the precision
of the result is in fact _lost_ due to rounding error.
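A minimal sketch of this effect in Python (the fixed-point iteration
x <- cos(x) and the struct round-trip used to simulate single precision
are my illustration, not from the post): the same "stop when k stops
decreasing" rule exits after far fewer iterations in single precision
than in double, because the step size k hits the rounding floor sooner.

```python
import math
import struct

def to_f32(x):
    # Round a Python float (double) to single precision and back,
    # simulating a 32-bit float.
    return struct.unpack('f', struct.pack('f', x))[0]

def iterations_until_stall(rnd):
    # Fixed-point iteration x <- cos(x), stopping -- like the loop
    # above -- as soon as the step size k stops decreasing.
    x = rnd(1.0)
    x_new = rnd(math.cos(x))
    k0 = float('inf')
    k = abs(x_new - x)
    n = 0
    while k < k0 and n < 10_000:   # iteration cap guards termination
        k0 = k
        x = x_new
        x_new = rnd(math.cos(x))
        k = abs(x_new - x)
        n += 1
    return n

n_single = iterations_until_stall(to_f32)       # stalls early
n_double = iterations_until_stall(lambda x: x)  # keeps "refining" much longer
```

The extra double-precision iterations do not make the answer more
trustworthy; they are exactly the useless work described above.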
What is happening is that the real comparison is probably:
k - k0 < epsilon
where epsilon was 0.0 in floating point, and thus omitted.
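One way to write that hidden epsilon out explicitly (the helper and the
toy refinement step are my illustration, not from the post): continue
only while each pass improves the measure by more than a tolerance
chosen for the problem, not for the working type.

```python
def refine_until_stable(step, k_init, eps):
    # Same shape as the loop above, but with epsilon made explicit:
    # stop once the per-iteration improvement drops below eps.
    k0 = float('inf')
    k = k_init
    while k0 - k > eps:    # was: while k < k0   (i.e. eps == 0.0)
        k0 = k
        k = step(k)
    return k

# Example: a refinement that halves the distance to 1.0 each pass.
result = refine_until_stable(lambda k: 1.0 + 0.5 * (k - 1.0), 100.0, 1e-9)
```

With an explicit eps the loop behaves the same whether k is a float or
a double, which is the point: the programmer, not the type, decides how
much precision is meaningful.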
My point is that throwing away information is what numerical
programming is all about. Numerical programmers need to know
how big numbers are, and how much significance they have,
and optimise calculations accordingly -- sometimes by _using_
the precision of the working types to advantage.
To put this another way: it is generally bad to keep more digits (bits)
of precision than you actually have, because it can be misleading.
So a language should not assume that it is OK to add more precision.
It may not be.
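A concrete illustration of those misleading digits (single precision
simulated by round-tripping through struct; my construction, not from
the post): store 0.1 in single precision, then look at it as a double.

```python
import struct

def to_f32(x):
    # Round a double to single precision and back.
    return struct.unpack('f', struct.pack('f', x))[0]

# A single-precision 0.1, silently "upcast" to double: it now prints
# with ~17 digits, but only the first 7 or so are meaningful.
widened = to_f32(0.1)
print(repr(widened))   # 0.10000000149011612
```

The trailing digits are an artifact of the representation, not data;
an implicit widening conversion manufactures precision that was never
there.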
John (Max) Skaller, mailto:email@example.com
10/1 Toxteth Rd Glebe NSW 2037 Australia voice: 61-2-9660-0850