[Slightly OT]: More on ints and floats

Tim Daneliuk tundra at tundraware.com
Mon Apr 7 18:03:53 EDT 2003


OK, I don't want to resurrect another interminable discourse on ints,
floats, and how they are/ought/can be handled in the language. But I
have a sort of theoretical mathematical question which the whole
business brought to mind.

As I understand it, integers and floats are distinct mathematical
entities. A colleague of mine claims that, insofar as we use them in
computing, ints are merely a proper subset of floats. He furthermore
asserts that (again, as regards computing) the distinction between
them was made purely as a practical matter, because floating point
arithmetic was historically computationally expensive. He argues that
anywhere one can use an int, these days (with cheap FP hardware), one
could instead use a float zero-extended to the precision of the machine
and get equivalent computational results. Say the hardware supports 4
digits of precision. He is arguing that:


    3/4.0000  is equivalent to 3.0000/4.0000

(Never mind the old Python modulo vs. division debate.)
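
For concreteness, here is a minimal Python sketch of his claim (assuming
Python 3 semantics, where / does true division; the large-integer case at
the end is my own addition, not part of his argument):

    # The equivalence holds for small values:
    print(3 / 4.0000 == 3.0000 / 4.0000)   # True: both are exactly 0.75
    print(float(3) == 3)                   # True: 3 is exactly representable

    # But a 64-bit double carries only 53 bits of significand, so not
    # every integer survives the round trip through float:
    big = 2**53 + 1
    print(int(float(big)) == big)          # False: 2**53 + 1 rounds to 2**53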

In effect he is saying that, unless there is a performance/cost issue
at hand, there is no real reason to differentiate between ints and
floats in practical programming problems.

As a matter of 'pure' mathematics, I argued that ints and floats are
very different critters. My argument (which is no doubt formally very
weak) is that the integer 3 and the float 3.0000 are different because
of the precision problem. For instance, the integer 3 exists at a single
invariant point on the number line, but 3.0000 stands for every number
from 2.99995 through 3.00004 -- i.e., everything that rounds to 3.0000
at four digits of precision.
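
To make that concrete, a rough Python sketch (it deliberately conflates
display precision with machine precision, and math.nextafter requires
Python 3.9 or later):

    import math

    # With only four decimal digits shown, distinct reals become
    # indistinguishable:
    for x in (2.99996, 3.0, 3.00004):
        print(f"{x:.4f}")                  # all three print as 3.0000

    # In a real binary double the spacing is much finer, but the same
    # idea applies: 3.0 has a nearest representable neighbour.
    print(math.nextafter(3.0, 4.0))        # 3.0000000000000004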

Could one of the genius mathematicians here bring some clarity to this
discussion, please?

TIA,
-- 
----------------------------------------------------------------------------
Tim Daneliuk     tundra at tundraware.com
PGP Key:         http://www.tundraware.com/PGP/TundraWare.PGP.Keys.txt




