[Tutor] Re: If elif not working in comparison

C Smith smichr at hotmail.com
Tue Mar 29 12:06:59 CEST 2005

 > gerardo arnaez wrote:
 >> On Mon, 28 Mar 2005 09:27:11 -0500, orbitz <orbitz at 
 >> wrote:
 >>> Floats are inherently imprecise.  So if things aren't working as 
 >>> you expect, don't be surprised if 0.15, 0.12, and 0.1 are closer to 
 >>> the number than you think.
 >> Are you telling me that I can't expect 2-digit precision?
 > Not without using round. Have *NO* faith in floating points. This is
 > especially true when you are creating the decimals via division and 
 > the like.

What makes one nervous when looking at your tests with floating point 
numbers is that calculations that lead up to one of the numbers in 
your test might actually be a tiny bit higher or a tiny bit lower than 
you expect.  *However*, I think the thing to keep in mind is that if 
you are only a billionth or less away from the boundary, it really 
doesn't matter which way you go in terms of dosage--you are at the 
boundary for all practical purposes.
Here is an example of two calculations that you think should get you to 
2.1. One is actually just shy of it and the other just a bit past:

 >>> 3*.7
2.0999999999999996
 >>> 7*.3
2.1000000000000001

So if you have the following test to determine if it is time to 
increase someone's dose of medication and the results are based on some 
measurement being greater than 2.1, then...

 >>> def test(x):
  if x>2.1:
   print 'increase dose'
  else:
   print 'decrease dose'


If you test the above computed values, you will get different results:

 >>> dose1, dose2 = 3*.7, 7*.3
 >>> test(dose1)
decrease dose
 >>> test(dose2)
decrease dose

OR NOT! Why do they both come out the same even though one number is 
bigger than 2.1 and the other is not? Because the literal 2.1 is not 
actually '2.1':

 >>> 2.1
2.1000000000000001

The 7*.3 result is actually "equal" to 2.1: both are stored as 
2.1000000000000001.
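As an aside, in more recent Pythons (2.7 and later) the decimal module 
can display the full exact binary value hiding behind the literal; a 
quick sketch:

```python
from decimal import Decimal

# Constructing a Decimal directly from a float (supported in
# Python 2.7+/3.x) reveals the exact double that 2.1 is stored as.
print(Decimal(2.1))
# 2.100000000000000088817841970012523233890533447265625
```

So 2.1000000000000001 is just the shortened 17-digit view of that value.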

On the one hand, we want the boundaries so we can say, "if you are past 
this boundary you take the next sized dose"...but if you are so close 
to the boundary that you can just as well say you are right on it, 
does it really matter which way you go? Other than offending our 
deterministic sensibilities, are there other reasons to be concerned? 
As far as economics, the 2.1 case will have people staying at a lower 
dose longer (as would 2.2, 2.6 and 2.7, which are all a tiny bit bigger 
than their mathematical values, just like 2.1).  Just the opposite will 
happen at 2.3, 2.4, 2.8, and 2.9, since these values are represented 
as numbers smaller than their mathematical counterparts:

 >>> 2.9
2.8999999999999999
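You can make the same high/low comparison mechanically for all of these 
literals; here is a sketch (written for a modern Python, using Decimal 
to expose the exact stored values):

```python
from decimal import Decimal

# For each one-decimal literal, compare the exact stored double
# (Decimal of the float) against the exact mathematical value
# (Decimal of the string).
for s in ["2.1", "2.2", "2.3", "2.4", "2.6", "2.7", "2.8", "2.9"]:
    stored = Decimal(float(s))   # the double actually stored
    exact = Decimal(s)           # the true decimal value
    print(s, "stored high" if stored > exact else "stored low")
```

This confirms the grouping above: 2.1, 2.2, 2.6, and 2.7 are stored a 
hair high; 2.3, 2.4, 2.8, and 2.9 a hair low.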

If you have Python 2.4 you might want to check out the decimal type 
that is now part of the language.  There is a description at


which starts with:

"Python has always supported floating-point (FP) numbers, based on the 
underlying C double type, as a data type.  However, while most 
programming languages provide a floating-point type, many people (even 
programmers) are unaware that floating-point numbers don't represent 
certain decimal fractions accurately.  The new Decimal type can 
represent these fractions accurately, up to a user-specified precision 
limit.  "

And it gives the following example:

Once you have Decimal instances, you can perform the usual mathematical 
operations on them.  One limitation: exponentiation requires an integer 
exponent:
 >>> import decimal
 >>> a = decimal.Decimal('35.72')
 >>> b = decimal.Decimal('1.73')
 >>> a+b
Decimal("37.45")
 >>> a-b
Decimal("33.99")
 >>> a*b
Decimal("61.7956")
 >>> a/b
Decimal("20.64739884393063583815028902")
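The "user-specified precision limit" mentioned in the quote is set on 
the decimal context; a small sketch (modern Python syntax):

```python
from decimal import Decimal, getcontext

# The context precision controls how many significant digits
# arithmetic results carry (the default is 28).
getcontext().prec = 6
a = Decimal('35.72')
b = Decimal('1.73')
print(a / b)  # 20.6474
```

Note that the precision applies to the results of operations, not to 
how the Decimal instances themselves are constructed.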

Another suggestion is to use the Round replacement that I wrote about 
in response to the "Float precision untrustworthy~~~" thread. With it, 
you don't have to worry about how large or small your numbers are when 
you Round them (unlike with round()): you just decide how many 
"parts per x" you are interested in.  E.g. if your calculations are 
only precise to about 1 part per hundred, then you would round them to 
the 2nd (or maybe 3rd) digit and compare them.  With the separation of 
code and data as already suggested, this would look something like this:

from math import floor, log10

def Round(x, n=0, sigfigs=False):
	if x == 0: return 0
	if not sigfigs:
		return round(x, n)
	# round to nth digit counting from the first non-zero digit
	# e.g. if n=3 and x = 0.002356 the result is 0.00236
	return round(x, n - 1 - int(floor(log10(abs(x)))))

def dose_value(x, cutoffs):
	# cutoffs is a list of (cutoff, value) pairs -- the data,
	# kept separate from the code
	for cutoff, value in cutoffs:
		if Round(x, 3) < Round(cutoff, 3):
			return value
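To make the cutoff idea concrete, here is a self-contained sketch 
(restated with its import so it runs on its own, in modern Python 
syntax; the cutoff table and dose strings are invented purely for 
illustration):

```python
from math import floor, log10

def Round(x, n=0, sigfigs=False):
    if x == 0:
        return 0
    if not sigfigs:
        return round(x, n)
    # round to nth digit counting from the first non-zero digit
    return round(x, n - 1 - int(floor(log10(abs(x)))))

# Hypothetical (cutoff, dose) table -- values invented for illustration.
cutoffs = [(2.1, '2 mg'), (2.5, '3 mg'), (10.0, '4 mg')]

def dose_for(x):
    # Return the dose for the first cutoff the rounded value falls under.
    for cutoff, value in cutoffs:
        if Round(x, 3) < Round(cutoff, 3):
            return value

# Both computations of "2.1" now land in the same bracket:
print(dose_for(3 * .7))  # 3 mg
print(dose_for(7 * .3))  # 3 mg
```

After Rounding to 3 digits, 3*.7 and 7*.3 compare identically against 
the 2.1 cutoff, so the boundary flakiness disappears.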

IMHO, I don't think that decimal is the way to go with such 
calculations, since calculations performed with limited-precision 
numbers shouldn't be treated as though they had infinite precision. 
From basic error analysis of the calculated values above, one probably 
shouldn't keep more than 3 digits of the a/b result, 
Decimal("20.64739884393063583815028902").  On the other hand, perhaps 
one can keep all the digits until the end and then make the precision 
decision--this gives the best of both worlds.
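That keep-the-digits-until-the-end approach can be sketched with 
Decimal's quantize method (modern Python syntax):

```python
from decimal import Decimal

# Carry full precision through the calculation...
q = Decimal('35.72') / Decimal('1.73')
# ...and only round when reporting, here to one decimal place
# (three significant digits for a number of this size).
print(q.quantize(Decimal('0.1')))  # 20.6
```

quantize rounds to the exponent of its argument, so the precision 
decision is made once, at the very end.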


More information about the Tutor mailing list