high precision mathematics

Paul Rubin phr-n2002a at nightsong.com
Sun Feb 17 08:45:51 CET 2002

Tim Roberts <timr at probo.com> writes:
> And the original poster should understand that this is true because such
> systems are of little practical use.  There are very, very few physical
> processes where floating point is useful AND precision of more than 6
> significant digits is really required.  Large integers can be useful (and
> Python supports them), but a requirement for high precision floats is
> usually a sign that the requirer does not understand his problem space.

If that were really true, computers wouldn't bother implementing
double and extended precision.  Those features exist because they
are needed.
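For concreteness (my illustration, not the poster's), NumPy exposes the machine epsilon of each IEEE format, which makes the gap between single and double precision easy to see:

```python
import numpy as np

# IEEE 754 machine epsilon: the spacing between 1.0 and the next
# representable number in each format.
eps32 = np.finfo(np.float32).eps   # single precision, ~7 decimal digits
eps64 = np.finfo(np.float64).eps   # double precision, ~16 decimal digits

print(eps32)   # ~1.19e-07
print(eps64)   # ~2.22e-16
```

(On many platforms `np.longdouble` gives you x86 80-bit extended precision as well, though what you actually get is platform-dependent.)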

If you try to (say) solve a linear system with a near-singular matrix
using single precision and a numerically unstable algorithm, your
answers can come out totally bogus (every digit incorrect), even if
the inputs started with 6 accurate significant figures.  You can then
either spend years of your life studying numerical analysis so you can
design algorithms with ultra-careful error control and get good
answers in single precision, or you can just run your program with
double or extended precision arithmetic and get good answers right
away by burning a little more computer time.
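A minimal sketch of that failure mode, using NumPy and a Hilbert matrix (a standard badly-conditioned example; the specific setup is mine, not from the post):

```python
import numpy as np

# Hilbert matrices are a classic family of near-singular (badly
# conditioned) matrices; their condition number grows exponentially
# with n.  For n = 8 it is already around 1.5e10.
n = 8
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)          # the exact solution we try to recover
b = H @ x_true               # right-hand side, formed in double

# The same solve, in single vs. double precision.
x32 = np.linalg.solve(H.astype(np.float32), b.astype(np.float32))
x64 = np.linalg.solve(H, b)

err32 = float(np.max(np.abs(x32 - x_true)))
err64 = float(np.max(np.abs(x64 - x_true)))
print(err32)   # large: the single-precision answer is essentially garbage
print(err64)   # tiny: double precision still recovers the solution
```

The inputs here are perfectly clean; the wildly wrong single-precision answer comes entirely from rounding error amplified by the near-singular matrix.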

If all you want is to get good answers, it's often easier to crank up
the precision and let the machine do the work.
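In Python itself, cranking up the precision can be as simple as the standard-library `decimal` module (again my illustration, not the poster's): in 53-bit doubles a sufficiently small term is lost outright to rounding, while a 50-digit Decimal context keeps it.

```python
from decimal import Decimal, getcontext

# In double precision, 1e-20 is far below the rounding error of
# values near 1.0, so adding it changes nothing:
lost = (1.0 + 1e-20) - 1.0
print(lost)    # 0.0

# Crank the working precision up to 50 significant digits instead:
getcontext().prec = 50
kept = (Decimal(1) + Decimal("1e-20")) - Decimal(1)
print(kept)    # 1E-20
```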

See for example 


which is mostly a rant against Java for not implementing IEEE extended
precision arithmetic, but which discusses the precision issue from a
number of perspectives.
