[Tutor] int(1.99...99) = 1 and can = 2

eryk sun eryksun at gmail.com
Sun May 1 04:06:09 EDT 2016


On Sun, May 1, 2016 at 1:02 AM, boB Stepp <robertvstepp at gmail.com> wrote:
>
> py3: 1.9999999999999999
> 2.0
> py3: 1.999999999999999
> 1.999999999999999
...
> It has been many years since I did problems in converting decimal to
> binary representation (Shades of two's-complement!), but I am under
> the (apparently mistaken!) impression that in these 0.999...999
> situations that the floating point representation should not go "up"
> in value to the next integer representation.

https://en.wikipedia.org/wiki/Double-precision_floating-point_format

A binary64 float has 52 significand bits, with an implicit integer
bit of 1, so it's effectively 53 bits. That leaves 11 bits for the
exponent, and 1 bit for the sign.
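
As a quick check, sys.float_info reports that significand width
(counting the implicit leading 1):

    >>> import sys
    >>> sys.float_info.mant_dig
    53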

The 11-bit exponent value is biased by 1023, i.e. 2**0 is stored as
1023. The minimum binary exponent is (1-1023) == -1022, and the
maximum binary exponent is (2046-1023) == 1023. A biased exponent of 0
signifies either signed 0 (mantissa is zero) or a subnormal number
(mantissa is nonzero). A biased exponent of 2047 signifies either
signed infinity (mantissa is zero) or a non-number, i.e. NaN
(mantissa is nonzero).
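
As a rough sketch of how to look at those fields directly, you can
reinterpret a float's bit pattern with the struct module (the helper
name here is just for illustration):

    >>> import struct
    >>> def fields(x):
    ...     # view the 64-bit double as an unsigned integer
    ...     bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    ...     return bits >> 63, (bits >> 52) & 0x7ff, bits & (2**52 - 1)
    ...
    >>> fields(1.0)           # (sign, biased exponent, fraction)
    (0, 1023, 0)
    >>> fields(5e-324)        # subnormal: biased exponent 0
    (0, 0, 1)
    >>> fields(float('inf'))  # biased exponent 2047, fraction 0
    (0, 2047, 0)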

The largest finite value has all 53 bits set:

    >>> sys.float_info.max
    1.7976931348623157e+308

    >>> from decimal import Decimal
    >>> sum(Decimal(2**-n) for n in range(53)) * 2**1023
    Decimal('1.797693134862315708145274237E+308')

The smallest positive normal value has all 52 fraction bits unset:

    >>> sys.float_info.min
    2.2250738585072014e-308

    >>> Decimal(2)**-1022
    Decimal('2.225073858507201383090232717E-308')
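
Below that, subnormals (biased exponent 0) stretch the range down to
2**-1074, trading away precision. For instance:

    >>> import math
    >>> math.ldexp(1, -1074)   # 2**-1074, smallest positive subnormal
    5e-324
    >>> sys.float_info.min * sys.float_info.epsilon
    5e-324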

The machine epsilon value is 2**-52:

    >>> sys.float_info.epsilon
    2.220446049250313e-16

    >>> Decimal(2)**-52
    Decimal('2.220446049250313080847263336E-16')
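
Put differently, epsilon is the gap between 1.0 and the next
representable float; a quick check:

    >>> 1.0 + sys.float_info.epsilon
    1.0000000000000002
    >>> 1.0 + sys.float_info.epsilon / 2   # ties round to even, back to 1.0
    1.0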

Your number is just shy of 2, i.e. the implicit 1 plus all 52
fraction bits set, with a binary exponent of 0.

    >>> sum(Decimal(2**-n) for n in range(53))
    Decimal('1.999999999999999777955395075')
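
That sum is the largest double below 2.0, i.e. exactly 2 minus
epsilon:

    >>> 2.0 - sys.float_info.epsilon
    1.9999999999999998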

The next increment by epsilon jumps to 2.0. The 52-bit mantissa rolls
over to all 0s, and the exponent increments by 1, i.e. (1 + 0.0) *
2**1. Python's float type has a hex() method to let you inspect this:

    >>> (1.9999999999999998).hex()
    '0x1.fffffffffffffp+0'
    >>> (2.0).hex()
    '0x1.0000000000000p+1'

where the integer part is 0x1; the 52-bit mantissa is 13 hexadecimal
digits; and the binary exponent comes after 'p'. You can also parse a
float hex string using float.fromhex():

    >>> float.fromhex('0x1.0000000000000p-1022')
    2.2250738585072014e-308

    >>> float.fromhex('0x1.fffffffffffffp+1023')
    1.7976931348623157e+308
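
Which is why the int() results in your session come out the way they
do: the literal with sixteen 9s is closer to 2.0 than to the largest
double below 2, so it parses as exactly 2.0, while the fifteen-9s
literal is still about 1e-15 short of 2 and stays below it:

    >>> 1.9999999999999999 == 2.0
    True
    >>> int(1.9999999999999999)
    2
    >>> 1.999999999999999 == 2.0
    False
    >>> int(1.999999999999999)
    1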
