Hendrik van Rooyen
mail at microcorp.co.za
Sat Jan 13 07:20:38 CET 2007
"Nick Maclaren" <nmm1 at cus.cam.ac.uk> wrote:
> In article <mailman.2632.1168583141.32031.python-list at python.org>,
> "Hendrik van Rooyen" <mail at microcorp.co.za> writes:
> |> I would have thought that this sort of thing was a natural consequence
> |> of rounding errors - if I round (or worse truncate) a binary, I can be off
> |> by at most one, with an expectation of a half of a least significant digit,
> |> while if I use hex digits, my expectation is around eight, and for decimal
> |> around five...
> |> So it would seem natural that errors would propagate
> |> faster on big base systems, AOTBE, but this may be
> |> a naive view...
> Yes, indeed, and that is precisely why the "we must use binary" camp won
> out. The problem was that computers of the early 1970s were not quite
> powerful enough to run real applications with simulated floating-point
> arithmetic. I am one of the half-dozen people who did ANY actual tests
> on real numerical code, but there may have been some work since!
*grin* - I was around at that time, and some of the inappropriate habits
almost forced by the lack of processing power still linger in my mind,
like: "Don't use division if you can possibly avoid it - it's EXPENSIVE!"
- it seems so silly nowadays.
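The expectation figures from the quoted paragraph (half a digit for binary, around five for decimal, around eight for hex) can be checked with a rough simulation - this sketch is my own illustration, not part of the thread. Chopping at a base-b digit boundary discards a tail that is roughly uniform over b binary ulps, so the mean error comes out near (b - 1) / 2:

```python
import random

def mean_truncated_bits(base, trials=200_000):
    """Monte-Carlo estimate of the expected truncation error, measured
    in units of the finest binary ulp, when one trailing digit of the
    given base is chopped off.  (Illustrative sketch only: the discarded
    tail is uniform over {0, ..., base - 1} ulps, so the mean is about
    (base - 1) / 2.)"""
    return sum(random.randrange(base) for _ in range(trials)) / trials

for base in (2, 10, 16):
    print(base, round(mean_truncated_bits(base), 2))
# roughly: 2 -> 0.5, 10 -> 4.5, 16 -> 7.5
```

Which lines up with the claim: the bigger the base, the more you can lose in a single truncation, all other things being equal.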
> Nowadays, it would be easy, and it would make quite a good PhD. The
> points to look at would be the base and the rounding rules (including
> IEEE rounding versus probabilistic versus last bit forced[*]). We know
> that the use or not of denormalised numbers and the exact details of
> true rounding make essentially no difference.
> In a world ruled by reason rather than spin, this investigation
> would have been done before claiming that decimal floating-point is an
> adequate replacement for binary for numerical work, but we don't live
> in such a world. No matter. Almost everyone in the area agrees that
> decimal floating-point isn't MUCH worse than binary, from a numerical
> point of view :-)
As an old slide rule user I can agree with this - if you know the order
of the answer, and maybe two points after the decimal, it will tell you
if the bridge will fall down or not. Having an additional fifty decimal
places of accuracy does not really add any real information in these
cases. It's nice, of course, if it's free - like it has almost become - but
I think people get mesmerized by the numbers, without giving any
thought to what they mean - which is probably why we often see
threads complaining about the "error" in the fifteenth decimal place.
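The canonical example behind those threads - shown here as my own illustration, not part of the original post - is that 0.1 and 0.2 have no exact binary representation, so their sum is off far below the precision of any physical measurement:

```python
# 0.1 and 0.2 are not exactly representable in binary floating point,
# so the sum is "wrong" in the sixteenth significant digit.
total = 0.1 + 0.2
print(total)         # 0.30000000000000004
print(total == 0.3)  # False
print(abs(total - 0.3) < 1e-9)  # True: the "error" is negligible
```

A slide rule user would never have noticed.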
> [*] Assuming signed magnitude, calculate the answer truncated towards
> zero but keep track of whether it is exact. If not, force the last
> bit to 1. An old, cheap approximation to rounding.
This is not so cheap - it's good solid reasoning in my book -
after all, "something" is a lot more than "nothing" and should
not be thrown away...