[Tutor] range function and floats?

Wayne Werner waynejwerner at gmail.com
Wed Jan 5 20:30:59 CET 2011


On Wed, Jan 5, 2011 at 10:43 AM, Steven D'Aprano <steve at pearwood.info> wrote:

> Wayne Werner wrote:
>
>> The decimal module allows you to get rid of those pesky floating point
>> errors. See http://docs.python.org/library/decimal.html for more info.
>>
>
> That's a myth. Decimal suffers from the same floating point issues as
> binary floats. It's quite easy to demonstrate the same sort of rounding
> errors with Decimal as for float:
>
> >>> from decimal import Decimal as D
> >>> x = D(1)/D(3)
> >>> 3*x == 1
> False
>

I should have clarified - when I say pesky floating point errors I mean
errors in precision that you naturally would not expect.

Mathematically, 1/3 is 0.333 repeating forever. On a computer you're limited
to a finite precision, and the moment you truncate 0.333(repeating) to *any*
finite number of digits you no longer have the exact result of the
mathematical operation 1/3. Yes, mathematically 1/3 * 3 == 1, but the error in
the Decimal module is introduced *only* by the division itself. It might be
useful for the Decimal module to carry a "repeating" flag on the results of
operations such as 1/3, which would get rid of the truncation error. But this
is a fundamentally different error from the standard floating point errors.
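
Here's a minimal sketch of what that truncation looks like at decimal's
default 28-digit precision (essentially Steven's example spelled out; nothing
here beyond the standard decimal module):

from decimal import Decimal as D

# The default context carries 28 significant digits, so 1/3 is truncated
# and multiplying back by 3 does not recover exactly 1.
x = D(1) / D(3)
print(x)           # Decimal('0.3333333333333333333333333333')
print(x * 3)       # Decimal('0.9999999999999999999999999999')
print(x * 3 == 1)  # False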

In [106]: 0.9
Out[106]: 0.90000000000000002

Mathematically, those are two different numbers. If I write x = 0.9, I
naturally assume that 0.9 is the value stored in x, not 0.90000000000000002.
Of course this comes down to how floating point is implemented - Python
evaluates the literal 0.9 as a binary float, and that is the stored value -
but that ignores the fact that *naturally* one does not expect this. Anyone
who has been through 2nd or 3rd grade, or whenever they teach about equality,
would expect this to evaluate to False:

In [112]: 0.90000000000000002 == 0.9
Out[112]: True

You don't get such silliness with the Decimal module:

In [125]: D('0.90000000000000002') == D('0.9')
Out[125]: False
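
As a small aside (assuming Python 2.7/3.1 or later, where Decimal() accepts a
float directly), you can see exactly which value the literal 0.9 is actually
stored as, and why the string-built Decimal stays distinct:

from decimal import Decimal

# Building a Decimal from the float exposes the exact binary value
# that the literal 0.9 is stored as:
print(Decimal(0.9))
# 0.90000000000000002220446049250313080847263336181640625

# Building it from a string keeps exactly the digits you typed:
print(Decimal('0.9'))   # 0.9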


> Between 1 and 1000 inclusive, there are 354 such numbers:
>
> >>> nums = []
> >>> for i in range(1, 1001):
> ...     x = D(1)/D(i)
> ...     if x*i != 1: nums.append(i)
> ...
> >>> len(nums)
> 354
>
>
>
> The problem isn't just division and multiplication, nor does it just affect
> fractional numbers:
>
> >>> x = D(10)**30
> >>> x + 100 == x
> True
>
>
Decimal DOES get rid of floating point errors, except in the case of
repeating decimals (or values requiring prohibitively large precision):

In [127]: x = D(10)**30

In [128]: x
Out[128]: Decimal('1.000000000000000000000000000E+30')

In [129]: x + 100
Out[129]: Decimal('1.000000000000000000000000000E+30')

If you reset the precision to an incredibly large number:
decimal.getcontext().prec = 1000

In [131]: x = D(10)**30

In [132]: x
Out[132]: Decimal('1000000000000000000000000000000')

In [133]: x + 100 == x
Out[133]: False

Voila, the error has vanished!
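
If you'd rather not change the global precision, decimal.localcontext
(available since Python 2.5) lets you raise it just for one calculation - a
self-contained sketch, assuming the default 28-digit context outside the
block:

import decimal
from decimal import Decimal as D

with decimal.localcontext() as ctx:
    ctx.prec = 50            # plenty of room for 31 significant digits
    x = D(10) ** 30
    print(x + 100 == x)      # False - the addition is exact here

x = D(10) ** 30              # recomputed at the default 28-digit precision
print(x + 100 == x)          # True - the 100 is rounded away again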


>
> So it simply isn't correct to suggest that Decimal doesn't suffer from
> rounding error.


I never said rounding errors - I said "pesky floating point errors". When
performing the operation 1/3, I naturally expect that my computer won't hold
every one of the 3's after the decimal point, and I don't categorize that as
pesky - that's just good sense if you know a little about computers. I also
expect that .333 * 3 will give me the number .999, and only .999, not
0.99900000000000011 or some other wonky value. Of course it's interesting to
note that Python handles the precision as expected when dealing with the
strings, but not with the floating point values themselves (at least in this
particular trial):

In [141]: .333 * 3
Out[141]: 0.99900000000000011

In [142]: str(.333*3)
Out[142]: '0.999'

In [143]: .333 * 3 == .999
Out[143]: False

In [144]: str(.333*3) == str(.999)
Out[144]: True
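
If I understand it correctly, that's just formatting at work: on Python 2,
str() of a float rounds to 12 significant digits while repr() shows the full
stored value, so the error digits disappear from the string even though the
number itself is unchanged. A quick sketch:

x = .333 * 3

# str() on Python 2 rounds to 12 significant digits; '%.12g' shows the
# same rounding explicitly. The stored binary value is untouched.
print(str(x))        # 0.999
print('%.12g' % x)   # 0.999
print(x == .999)     # False - the underlying values still differ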


> Decimal and float share more things in common than differences. Both are
> floating point numbers. Both have a fixed precision (although you can
> configure what that precision is for Decimal, but not float). Both use a
> finite number of bits, and therefore have a finite resolution. The only
> differences are:
>
> * as mentioned, you can configure Decimal to use more bits and higher
> precision, at the cost of speed and memory;
> * floats use base 2 and Decimal uses base 10;
> * floats are done in hardware and so are much faster than Decimal;
>

Precisely. It's not a magic bullet (1/3 != .33333333 mathematically, after
all!), *but* it eliminates the errors that you wouldn't normally expect when
working with "standard" amounts of precision, such as the expectation that
0.333 * 3 results in .999, not 0.99900000000000011.
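
For instance (a small sketch of that point), a short Decimal literal behaves
exactly as you'd work it out by hand, while the float version picks up the
wonky tail:

from decimal import Decimal as D

print(D('0.333') * 3)                # Decimal('0.999') - exact
print(D('0.333') * 3 == D('0.999'))  # True
print(.333 * 3)                      # 0.99900000000000011 (or similar)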

Hopefully a little more precise,
Wayne