Python less error-prone than Java

nikie n.estner at gmx.de
Sun Jun 4 08:41:23 EDT 2006


Let's look at two different examples. Consider the following C# code:

static decimal test() {
   decimal x = 10001;   // C#'s decimal type: exact base-10 arithmetic
   x /= 100;            // 100.01
   x -= 100;            // 0.01
   return x;
}

It returns "0.01", as you would expect. Now consider the Python
equivalent:

def test():
    x = 10001
    x /= 100      # integer division truncates: 100
    x -= 100      # 0
    return x

It returns "0". Clearly an error!
Even if you used "from __future__ import division", it would actually
return "0.010000000000005116", because true division produces a binary
float that cannot represent 100.01 exactly. Depending on the context,
that may still be an intolerable error.
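
If exact decimal arithmetic is what's wanted, the closest Python
equivalent of the C# version uses the standard-library decimal module
(a minimal sketch):

from decimal import Decimal

def test():
    x = Decimal(10001)   # exact base-10 value, like C#'s decimal
    x /= 100             # Decimal("100.01")
    x -= 100             # Decimal("0.01")
    return x

print(test())            # prints 0.01

With the right type chosen, Python gives the same exact answer.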

Moral: the problem isn't whether the types are chosen at compile time
or at run time; it's simply _what_ type is chosen, and whether it's
appropriate or not.

I can even think of an example where C's (and Java's) bounded ints are
the right choice, while Python's arbitrary-precision math isn't: Assume
you get two 32-bit integers containing two time values (or values from
an incremental encoder, or counter values). How do you find out how
many timer ticks (or increments, or counts) have occurred between those
two values, and which one was earlier? In C, you can just write:

   /* relies on 32-bit long and two's-complement wraparound */
   long Distance(long t1, long t0) { return t1 - t0; }

And all the wraparound cases will be handled correctly (assuming there
have been fewer than 2^31 timer ticks between the two measurements).
"Distance" will return a positive value if t1 was measured after t0,
and a negative value otherwise, even if there's been a wraparound in
between.
Try the same in Python and tell me which version is simpler!
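
For comparison, here's one way to express the same wraparound
arithmetic in Python (a sketch, assuming 32-bit counters):

def distance(t1, t0):
    # Reduce the difference into the signed 32-bit range
    # [-2**31, 2**31 - 1], mimicking two's-complement subtraction.
    return (t1 - t0 + 2**31) % 2**32 - 2**31

print(distance(5, 2**32 - 3))   # 8: the counter wrapped in between

Arguably not simpler than the C one-liner, which is exactly the point.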



