Python less error-prone than Java

Christoph Zwerschke cito at online.de
Sun Jun 4 14:25:31 EDT 2006


nikie wrote:
 > Let's look at two different examples: Consider the following C# code:
 >
 > static decimal test() {
 >    decimal x = 10001;
 >    x /= 100;
 >    x -= 100;
 >    return x;
 > }
 >
 > It returns "0.01", as you would expect.

Yes, I would expect that because I have defined x as decimal, not int.

 > Now, consider the python equivalent:
 >
 > def test():
 >     x = 10001
 >     x /= 100
 >     x -= 100
 >     return x

No, that's not the Python equivalent. The equivalent of the line

decimal x = 10001

in Python would be

x = 10001.0

or even:

from decimal import Decimal
x = Decimal(10001)

Setting x = 10001 would be equivalent to the C# code

int x = 10001

 > It returns "0". Clearly an error!

That's not clearly an error. If you set int x = 10001 in C#, then you 
also get a "0". By setting x to be an integer, you are implicitly 
telling Python that you are not interested in fractions, and Python does 
what you want. Granted, this is arguable and will be changed in the 
__future__, but I would not call that an error.
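For the record, here is the distinction spelled out: "//" is explicit
integer division, while "/" (after the __future__ import, or in Python 3,
where it is the default) is true division:

```python
# Integer division discards the fraction, just like C# "int":
print(10001 // 100 - 100)   # -> 0

# True division keeps it (the __future__ behavior):
print(10001 / 100 - 100)    # -> roughly 0.01, as a binary float
```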

By the way, the equivalent Python code to your C# program gives on my 
machine the very same result:
 >>> x = 10001.0; x /= 100; x -= 100; print x
0.01

 > Even if you used "from __future__ import division", it would actually
 > return "0.010000000000005116", which, depending on the context, may
 > still be an intolerable error.

With from __future__ import division, I also get 0.01 printed. Anyway, 
if there are small discrepancies, these have nothing to do with Python 
but rather with the underlying floating-point hardware and C library, 
the way you print the value, and the fact that 0.01 cannot be stored 
exactly as a binary float (though it can be represented exactly by a 
base-10 type such as Python's Decimal or C#'s decimal).
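With the Decimal class from the standard library, the same little
program gives 0.01 exactly:

```python
from decimal import Decimal

x = Decimal(10001)   # same program as the C# version, but with Decimal
x /= 100
x -= 100
print(x)             # -> 0.01, exact this time
```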

 > I can even think of an example where C's (and Java's) bounded ints are
 > the right choice, while Python's arbitrary-precision math isn't:
 > Assume you get two 32-bit integers containing two time values (or
 > values from an incremental encoder, or counter values). How do you
 > find out how many timer ticks (or increments, or counts) have occured
 > between those two values, and which one was earlier? In C, you can
 > just write:
 >
 >    long Distance(long t1, long t0) { return t1-t0; }
 >
 > And all the wraparound cases will be handled correctly (assuming there
 > have been less than 2^31 timer ticks between these two time values).
 > "Distance" will return a positive value if t1 was measured after t0, a
 > negative value otherwise, even if there's been a wraparound in
 > between. Try the same in Python and tell me which version is simpler!

First of all, the whole problem only arises because you are using a 
statically typed counter ;-) And it is only easy in C when your counter 
has 32 bits. But what about a 24-bit counter?

Anyway, in Python, you would first define:

def wrap(x, at=1<<31):
    if x < -at:
        x += at*2
    elif x >= at:
        x -= at*2
    return x

Then, the Python program is just as simple:

Distance = lambda t1,t0: wrap(t1-t0)
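To see it handle a wraparound, a quick self-contained sketch (it restates
both definitions so it runs on its own; the counter readings are made-up
values from a 32-bit counter):

```python
def wrap(x, at=1 << 31):
    # Fold a difference back into the signed range [-at, at),
    # mimicking two's-complement overflow of a 32-bit subtraction.
    if x < -at:
        x += at * 2
    elif x >= at:
        x -= at * 2
    return x

Distance = lambda t1, t0: wrap(t1 - t0)

t0 = 2**32 - 5           # reading just before the counter overflows
t1 = 10                  # reading taken shortly after the wraparound
print(Distance(t1, t0))  # -> 15: positive, t1 was later, 15 ticks apart
print(Distance(t0, t1))  # -> -15: negative, so t0 was measured earlier

# And the 24-bit counter is just a different "at":
print(wrap(10 - (2**24 - 5), at=1 << 23))  # -> 15
```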

-- Christoph


