Hendrik van Rooyen
mail at microcorp.co.za
Fri Jan 12 07:12:03 CET 2007
"Nick Maclaren" <nmm1 at cus.cam.ac.uk> wrote:
> Yes, but that wasn't their point. It was that in (say) iterative
> algorithms, the error builds up by a factor of the base at every step.
> If it wasn't for the fact that errors build up, almost all programs
> could ignore numerical analysis and still get reliable answers!
> Actually, my (limited) investigations indicated that such an error
> build-up was extremely rare - I could achieve it only in VERY artificial
> programs. But I did find that the errors built up faster for higher
> bases, so that a reasonable rule of thumb is that 28 digits with a decimal
> base was comparable to (say) 80 bits with a binary base.
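As a quick back-of-envelope check on that rule of thumb (my arithmetic, not from the thread): 28 decimal digits carry about 28 * log2(10) bits of information, so equating them to only 80 binary bits effectively discounts the decimal base by roughly 13 bits of precision.

```python
import math

# 28 decimal digits expressed as an equivalent number of binary bits:
bits = 28 * math.log2(10)   # about 93 bits of information

# The rule of thumb above equates this to only ~80 binary bits,
# i.e. it charges a decimal base roughly 13 bits of effective
# precision for its faster error growth.
discount = bits - 80
```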
I would have thought that this sort of thing was a natural consequence
of rounding errors - if I round (or, worse, truncate) a binary number, I can be off
by at most one in the first discarded place, with an expectation of half a least
significant digit, while if I use hex digits my expectation is around eight, and
for decimal around five.
So it would seem natural that errors would propagate
faster on big-base systems, all other things being equal,
but this may be a naive view...
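That expectation argument is easy to check numerically. A minimal sketch (the function name and parameters are mine, not from the thread) that truncates random fractions to a fixed number of places in a given base and averages the first digit thrown away:

```python
import random

def mean_first_discarded_digit(base, digits=6, trials=100_000, seed=0):
    """Truncate random fractions to `digits` base-`base` places and
    return the average value of the first discarded digit.

    For a uniformly distributed fraction that digit is uniform on
    0..base-1, so the mean approaches (base - 1) / 2: about 0.5 in
    binary, ~4.5 in decimal, ~7.5 in hex.
    """
    rng = random.Random(seed)
    scale = base ** digits
    total = 0
    for _ in range(trials):
        frac_lost = rng.random() * scale % 1.0  # part truncation discards
        total += int(frac_lost * base)          # leading digit of that part
    return total / trials
```

Running it for bases 2, 10 and 16 gives means close to 0.5, 4.5 and 7.5 respectively, which is the per-operation loss the expectation argument predicts; whether those losses actually compound at the rate of the base is the separate question Nick's measurements address.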