is int(round(val)) safe?
bokr at oz.net
Tue Nov 23 20:33:58 CET 2004
On Tue, 23 Nov 2004 10:50:23 -0600, Mike Meyer <mwm at mired.org> wrote:
>bokr at oz.net (Bengt Richter) writes:
>> On Mon, 22 Nov 2004 15:58:54 -0500, Peter Hansen <peter at engcorp.com> wrote:
>>>Russell E. Owen wrote:
>>>The problem* with floating point is inaccurate representation
>>>of certain _fractional_ values, not integer values.
>> Well, you mentioned really large integers, and I think it's worth
>> mentioning that you can get inaccurate representation of certain of those
>> values too. I.e., what you really have (for ieee 754 doubles) is 53 bits
>> to count with in steps of one weighted unit, and the unit can be 2**0
>> or 2**otherpower, where otherpower has 11 bits to represent it, more or less
>> +- 2**10 with an offset for 53. If the unit step is 2**1, you get twice the range
>> of integers, counting by two's, which doesn't give you a way of representing the
>> odd numbers between accurately. So it's not only fractional values that can get
>> truncated on the right. Try adding 1.0 to 2.0**53 ;-)
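Trying exactly that is a quick way to see where the granularity goes (a minimal check, nothing beyond stdlib floats):

```python
# At 2**53, IEEE 754 doubles run out of integer granularity: the spacing
# between adjacent representable values becomes 2.0, so adding 1.0 is
# absorbed by round-to-nearest-even (the tie lands back on 2**53).
big = 2.0 ** 53

assert big + 1.0 == big          # the 1.0 is absorbed
assert big + 2.0 > big           # steps of 2.0 still register
assert (big - 1.0) + 1.0 == big  # just below 2**53, 1.0 steps still work
```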
>It's much easier than that to get integer floating point numbers that
>aren't correct. Consider:
Yes. I was just trying to identify the exact point where you lose 1.0 granularity:
the last number with all ones in the available significant bits (including the
hidden one) is 2**53-1.
>>> from ut.miscutil import prb
The last power of 10 that is represented accurately is 22, and the reason is
plain when you look at the bits: 10**22 has more than 53 bits, but only zeroes
to the right of the top 53, whereas 10**23 has a one bit to the right of them,
so 1e22 == 10**22 but 1e23 != 10**23.
The trailing zeroes are eliminated if you use the corresponding power of 10/2,
i.e. of 5, which is always odd:
>>> prb( 5**22)
>>> prb( 5**23)
Or in decimal terms: what makes 5.**22 ok is that
>>> 5.**22 <= 2.**53-1
True
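`ut.miscutil.prb` is a private bit-printing helper, but a stand-in built on `format(n, 'b')` shows the same thing (only the helper's name mimics the post; everything else is stdlib):

```python
def prb(n):
    # Stand-in for ut.miscutil.prb: print an integer's binary digits.
    print(format(n, 'b'))

# 10**22 is the last power of ten a double represents exactly:
assert 1e22 == 10**22
assert 1e23 != 10**23

# Dividing out the twos leaves the odd part, 5**k; exactness reduces
# to whether that odd part fits in the 53 significand bits.
assert (5**22).bit_length() == 52   # fits, so 10**22 is exact
assert (5**23).bit_length() == 54   # doesn't fit, so 10**23 must round

prb(5**22)  # 52 binary digits
prb(5**23)  # 54 binary digits
```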
>I don't know the details on 754 FP, but the FP I'm used to represents
>*all* numbers as a binary fraction times an exponent. Since .1 can't
>be represented exactly, 1e<anything> will be wrong if you ask for
I don't understand the "since .1 ..." logic, but I agree with the second
part. Re *all* numbers, if you multiply the fraction represented by the
53 fractional bits of any number by 2**53 you get an integer that you can
consider to be multiplied by 2 ** (the exponent for the fraction - 53),
which doesn't change anything; I did that so I could talk about counting in
increments of one unit of least precision. But yes, the usual description is
as a fraction times a power of two.
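That rescaling of the fraction into an integer count can be written directly with `math.frexp` (a sketch only; `as_int_times_pow2` is my own name, not anything from the thread):

```python
import math

def as_int_times_pow2(x):
    # Rewrite a positive finite float as m * 2**e with m a 53-bit integer,
    # i.e. an integer count of units of least precision.
    frac, exp = math.frexp(x)   # x == frac * 2**exp, with 0.5 <= frac < 1
    m = int(frac * 2**53)       # scale the 53 fraction bits up to an integer
    return m, exp - 53          # compensate in the exponent

m, e = as_int_times_pow2(0.1)
assert m.bit_length() == 53
assert math.ldexp(m, e) == 0.1  # exact round trip: m * 2**e is the same double
```

`(0.1).as_integer_ratio()` gives equivalent information as an exact numerator/denominator pair.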
>This recently caused someone to propose that 1e70 should be a long
>instead of a float. No one mentioned the idea of making
>[0-9]+[eE]+?[0-9]+ be of integer type, and
>[0-9]*.[0-9]+[eE][+-]?[0-9]+ be a float. [0-9]+[eE]-[0-9]+ would also
>be a float. No simple rule for this, unfortunately.
I wrote a little exact decimal module based on keeping decimal exponents and
a rational numerator/denominator pair, which allows keeping an exact representation
of any reasonable (that you might feel like typing ;-) literal, like 1e70, etc.,
as a (numerator, denominator, decimal exponent) tuple, e.g.,
(1, 1, 70)
(123, 1, -47)
(1, 1, -1)
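For anyone without `ut.exactdec`, the idea is easy to sketch in a few lines (a toy parser of my own, not the actual ED API, which presumably does much more):

```python
def exact_literal(text):
    # Parse a decimal literal exactly into (numerator, denominator,
    # decimal exponent), with no float rounding anywhere. Toy sketch only.
    mantissa, _, exp = text.lower().partition('e')
    exp = int(exp) if exp else 0
    if '.' in mantissa:
        intpart, fracpart = mantissa.split('.')
        exp -= len(fracpart)          # shift the point into the exponent
        mantissa = intpart + fracpart
    return (int(mantissa), 1, exp)    # denominator slot reserved for division

assert exact_literal('1e70') == (1, 1, 70)
assert exact_literal('123e-47') == (123, 1, -47)
assert exact_literal('.1') == (1, 1, -1)
```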
The reason I mention this is not because I think all floating constants should be
represented this way in final code, but that maybe they should be in the compiler
AST, before code has been generated. At that point, it seems a shame to have done
a premature lossy conversion to platform floating point, since one might want to
take the AST and generate code with other representations.
>>> import compiler
>>> compiler.parse("a = .1")
Module(None, Stmt([Assign([AssName('a', 'OP_ASSIGN')], Const(0.10000000000000001))]))
>>> from ut.exactdec import ED
ED keeps what the literal '.1' says, exactly:
(1, 1, -1)
vs what's represented by the actual floating point bits of the 0.1 double:
(1000000000000000055511151231257827021181583404541015625L, 1L, -55)
Anyway, a tuple is an easy exact possibility for an intermediate representation of the number.
Of course, I guess you'd have to tag it as being from a floating point literal or else a code
generator would lose that implicit representation directive for ordinary code generation ...
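The long tuple above (the exact decimal value of the double nearest 0.1) can be verified with nothing but the stdlib:

```python
from fractions import Fraction

# The double nearest 0.1 is exactly 3602879701896397 / 2**55.
num, den = (0.1).as_integer_ratio()
assert (num, den) == (3602879701896397, 2**55)

# Multiply top and bottom by 5**55 to get a power-of-ten denominator;
# the numerator is then the 55-digit integer from the post, with
# decimal exponent -55.
N = num * 5**55
assert N == 1000000000000000055511151231257827021181583404541015625
assert Fraction(N, 10**55) == Fraction(0.1)  # same exact value
```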