[Baypiggies] rounding off problem
jimmy at retzlaff.com
Wed Mar 23 19:56:38 CET 2011
On Wed, Mar 23, 2011 at 11:25 AM, Tung Wai Yip <tungwaiyip at yahoo.com> wrote:
> The floating point representation of decimals also troubles me a lot. I
> often output data in json format. The output is unsightly and in many case
> just waste network bandwidth and storage.
> >>> json.dumps([1.0, 1.1, 10.1, 100.1])
> '[1.0, 1.1000000000000001, 10.1, 100.09999999999999]'
> If print can clean it up, why isn't it applied uniformly in other contexts?
> Wai Yip
print really doesn't clean this up; it glosses over the underlying problem:
translating between binary and decimal floating point is inherently inexact.
It's roughly equivalent to the problem of trying to express 1/3 as a
decimal. What's better, 0.3 or 0.333 or 0.33333333333333? All are inexact.
0.333 is "good enough" in some situations, but not others - the point is
that you have to choose what is best in each situation.
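You can see the inexactness directly with the decimal module - a sketch, not
specific to the json example above:

```python
from decimal import Decimal

# Decimal(float) prints the exact value of the IEEE 754 double that
# the literal 0.1 actually stores in memory - not the "0.1" you typed.
print(Decimal(0.1))
# -> 0.1000000000000000055511151231257827021181583404541015625
```

The literal 0.1 has no exact binary representation, so the stored value is
the nearest double, and the long decimal expansion above is that value
written out exactly.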
If a language glosses over this when it has hardware accelerated support for
floating point (i.e., binary floating point), then it isn't outputting the
value that is truly in memory as accurately as it could - instead of writing
out 100.09999999999999 it will give 100.1. But what if you really
wanted the more exact number? When translating from decimal to binary
floating point, you end up with the exact same representation in memory for
both of those numbers (and it's much closer to the long, ugly number).
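A quick way to convince yourself that both literals end up as the same
in-memory value (a minimal sketch using the struct module to compare the raw
bit patterns):

```python
import struct

a = 100.1
b = 100.09999999999999

# Both decimal literals round to the same IEEE 754 double...
print(a == b)  # -> True

# ...and the in-memory bit patterns are byte-for-byte identical.
print(struct.pack('<d', a) == struct.pack('<d', b))  # -> True
```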
Language implementors have to choose whether to show things as accurately as
possible (akin to 0.33333333333333) or to assume you want the closest
"simple" decimal representation (akin to 0.3). Python's implementors chose
more accuracy for serialization-like formats (repr, json, etc.) and the
simpler form for human-oriented output (print, string interpolation of
floats, etc.).
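For what it's worth, on later CPython versions (3.1 and up) repr itself
switched to the shortest decimal string that round-trips, so json output is
already clean; and if you want fewer digits than that, you can round before
serializing - a deliberate, lossy choice on your part, sketched here:

```python
import json

values = [1.0, 1.1, 10.1, 100.1]

# On CPython 3.1+, repr already picks the shortest round-tripping
# decimal string, so json.dumps is clean by itself.
print(json.dumps(values))
# -> [1.0, 1.1, 10.1, 100.1]

# To cap the number of digits explicitly, round before serializing.
print(json.dumps([round(v, 6) for v in values]))
# -> [1.0, 1.1, 10.1, 100.1]
```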
A pretty good understanding of all this is just one web page away. :)