On Sat, 08 Mar 2014 13:59:32 -0800 Ethan Furman email@example.com wrote:
So, how is this justified?
Python 3.4.0b3+ (default:aab7258a31d3, Feb 7 2014, 10:48:46)
[GCC 4.7.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
--> from decimal import Decimal as D
--> 9017.0109812864350067128347806
9017.010981286436
--> D(9017.0109812864350067128347806)
Decimal('9017.01098128643570817075669765472412109375')
In case my question isn't obvious: the direct float got me 16 digits, while the Decimal made from that float got me 42 digits. Why is the Decimal more "accurate" than the float it came from?
Both representations need to uphold the round-tripping property:
float(str(some_float)) == some_float
Decimal(str(some_decimal)) == some_decimal
However, since Decimal has arbitrary precision (as opposed to a fixed number of bits), the number of digits needed in the repr() can be much higher in order to uphold the round-tripping property:
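For instance, both round-trip properties can be verified directly (a quick sketch using only the standard library):

```python
from decimal import Decimal

# Round-tripping: the string form must parse back to the exact same value.
x = 9017.0109812864350067128347806        # stored as the nearest binary float
assert float(repr(x)) == x                # float round-trips via its repr()

d = Decimal('9017.0109812864350067128347806')
assert Decimal(str(d)) == d               # Decimal round-trips via its str()
```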
>>> float('1.1') == 1.1
True
>>> decimal.Decimal('1.1') == decimal.Decimal(1.1)
False
Moreover, the Decimal constructor strives to uphold the property that Decimal(some_number) == some_number (using the required internal precision).
Therefore, decimal.Decimal(1.1) is the exact decimal value of the binary floating-point number closest to the decimal literal '1.1':
>>> decimal.Decimal(1.1)
Decimal('1.100000000000000088817841970012523233890533447265625')
>>> decimal.Decimal('1.100000000000000088817841970012523233890533447265625') == 1.1
True
Indeed, the difference is less than a binary floating-point mantissa's resolution:
>>> decimal.Decimal(1.1) - decimal.Decimal('1.1')
Decimal('8.881784197001252323389053345E-17')
>>> 2**(-sys.float_info.mant_dig)
1.1102230246251565e-16
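That comparison can also be checked programmatically (a sketch; note that Decimal(2) ** -53 is itself rounded to the default 28-digit context precision, which is ample here):

```python
import sys
from decimal import Decimal

# The conversion error of the literal 1.1 is smaller than the relative
# resolution of a 53-bit mantissa, i.e. smaller than 2**-53.
diff = abs(Decimal(1.1) - Decimal('1.1'))
resolution = Decimal(2) ** (-sys.float_info.mant_dig)   # 2**-53
assert diff < resolution
```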
However, since Decimal has such a high potential resolution, changing even one faraway digit will break the equality:
>>> decimal.Decimal('1.100000000000000088817841970012523233890533447265626') == 1.1
False
(I changed the trailing '5' to a '6')
Which is why Decimal(1.1) has to be so precise, and so does its repr().
Of course, if you convert those literals to floats, they turn out to be the same binary floating-point number:
>>> 1.1 == 1.100000000000000088817841970012523233890533447265625
True
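The same collapse can be seen by feeding both long literals (the one ending in '5' and the altered one ending in '6') through float():

```python
# The two long decimal literals differ by far less than a double can
# resolve, so float() maps both to the same binary floating-point number.
a = float('1.100000000000000088817841970012523233890533447265625')
b = float('1.100000000000000088817841970012523233890533447265626')
assert a == b == 1.1
```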
float() can therefore afford to have a "user-friendly" repr() in some cases where Decimal can't.
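Concretely, float's repr() picks the shortest decimal string that still round-trips, while Decimal(1.1) must display every digit it stores (a small sketch):

```python
from decimal import Decimal

# float: repr() chooses the shortest string that parses back exactly.
assert repr(1.1) == '1.1'

# Decimal: the stored digits *are* the value, so they must all appear
# for Decimal(str(d)) == d to hold.
d = Decimal(1.1)
assert str(d) == '1.100000000000000088817841970012523233890533447265625'
assert Decimal(str(d)) == d
```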