A use for integer quotients
steve at lurking.demon.co.uk
Mon Jul 23 23:34:07 CEST 2001
On Mon, 23 Jul 2001 12:47:04 -0700, David Eppstein
<eppstein at ics.uci.edu> wrote:
>In article <3B5C749A.59F97320 at tundraware.com>,
> Tim Daneliuk <tundra at tundraware.com> wrote:
>> Not formally, AFAIK. For example, 3 with no decimal point following
>> could be anything from 2.5 to 3.4.
>No no no. '3.' i.e. a floating point number with that value could be
>anything in that range. '3' means exactly the integer 3, no approximation
A mathematician is free to use '3' to represent a real - the fact that
it is real is implicit in the context. Compilers and interpreters
aren't able to determine that, so integers and floats have different
notations even when they represent equivalent values.
We can't exactly match mathematics here, but we don't have to go so
far as having different symbols for integer and float division -
mathematics uses the same set of symbols for both operations, letting
the context choose the operator. Why should we behave differently from
what non-programmers such as mathematicians expect?
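[As a historical footnote: Python did end up with two distinct division
operators rather than one context-dependent symbol. PEP 238, accepted
later in 2001, made '/' true (float) division and '//' the integer
quotient. A minimal sketch of the behavior that was eventually adopted:]

```python
# PEP 238 "true division" as it stands in modern Python:
# '/' always produces a float result, even on two integers;
# '//' produces the floored integer quotient.

true_quotient = 7 / 2    # float division -> 3.5
floor_quotient = 7 // 2  # integer (floor) quotient -> 3

print(true_quotient, floor_quotient)
```

So the context never chooses the operator; the programmer spells out
which division is meant, which is exactly the outcome this thread was
debating.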