On 03/08/2014 12:32 PM, Oscar Benjamin wrote:
> On 8 March 2014 18:59, Guido van Rossum wrote:
>>
>> But I still have this nagging feeling that the precision Decimal(<float>)
>> currently gives you is, in a sense, fake, given that the input has much
>> less precision.
>
> That depends on what you use it for. My most common reason for converting
> a float to a Decimal is to test the accuracy of a float-based calculation
> by comparing it against the corresponding much higher-precision decimal
> calculation, e.g.:
>
>     with localcontext() as ctx:
>         ctx.prec = 100
>         error = f(D(x)) - D(f(x))
>
> For this I want the constructor to give me the exact value of the float x.
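To make sure I follow, here is Oscar's pattern as a self-contained sketch (the f below is just a hypothetical stand-in; anything built from arithmetic that accepts both float and Decimal operands would do):

    from decimal import Decimal as D, localcontext

    def f(v):
        # Stand-in calculation; uses only -, *, / so it works
        # unchanged on float and on Decimal operands.
        return (v * v - 2) / (v + 2)

    x = 0.1

    with localcontext() as ctx:
        ctx.prec = 100            # 100 significant decimal digits
        # f(D(x)): the whole calculation done in high-precision decimal.
        # D(f(x)): the float calculation, converted exactly afterwards.
        error = f(D(x)) - D(f(x))

    print(error)   # the rounding error accumulated by the float version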
I am not a mathematician, and it's been a long time since I took physics, but I seem to remember that a lot of importance was placed on significant digits.
So, how is this justified?
Python 3.4.0b3+ (default:aab7258a31d3, Feb 7 2014, 10:48:46)
[GCC 4.7.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
--> from decimal import Decimal as D
--> 9017.0109812864350067128347806
9017.010981286436
--> D(9017.0109812864350067128347806)
Decimal('9017.01098128643570817075669765472412109375')
In case my question isn't obvious: the direct float got me 16 digits, while the Decimal built from that float got me 42 digits. Why is the Decimal more "accurate" than the float it came from?
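Poking at it myself, here is a short script that shows where those 42 digits come from (assuming, as the docs describe, that Decimal(float) and fractions.Fraction(float) both convert the float's exact binary value):

    from decimal import Decimal
    from fractions import Fraction

    x = 9017.0109812864350067128347806   # rounds to the nearest binary64 double

    # Decimal(x) writes out the double's exact binary value in base 10;
    # a power-of-two denominator always gives a finite decimal expansion.
    print(Decimal(x))     # Decimal('9017.01098128643570817075669765472412109375')
    print(Fraction(x))    # the same exact value, as numerator / 2**k

    # repr(x) instead prints the *shortest* decimal string that round-trips
    # to the same double -- about 16-17 significant digits.
    print(repr(x))        # '9017.010981286436'

    # Converting the 42-digit Decimal back recovers the identical double:
    print(float(Decimal(x)) == x)   # True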