[Python-ideas] Python Numbers as Human Concept Decimal System

Ron Adam ron3200 at gmail.com
Sat Mar 8 20:47:43 CET 2014



On 03/08/2014 12:43 PM, Andrew Barnert wrote:
> From: Ron Adam <ron3200 at gmail.com>
>
> Sent: Saturday, March 8, 2014 10:10 AM
>
>> On 03/08/2014 11:43 AM, Chris Angelico wrote:
>>>   On Sun, Mar 9, 2014 at 4:35 AM, Ron Adam<ron3200 at gmail.com>  wrote:
>>
>>>>   > A repr should give the exact value of its object if it's supposed
>>>>   > to be a machine readable version of it.  (As numbers' __repr__
>>>>   > should do.)

>>>   As I understand it, float.__repr__ does indeed give the exact value,
>>>   in terms of reconstructing the float.

>> What I'm thinking about is...
>>
>> If float's repr is changed to disregard extra digits, will this still be
>> true?  How is float to know which extra digits should be disregarded?

> No one is suggesting such a change, and it would be shot down if anyone did.

Glad to hear it! :-)

> The old repr and str for float used to discard (different numbers of)
> digits. The current version does not. Instead, it picks the shortest
> string that would evaluate back to the same value if passed to the float
> constructor (or to eval).
>
> So, repr(0.100000000000000006) == '0.1', not because repr is discarding
> digits, but because 0.100000000000000006 == 0.1 (because the closest
> binary IEEE double value to both is
> 0.1000000000000000055511151231257827021181583404541015625).
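
A quick check at the interpreter (plain CPython 3, nothing assumed beyond
what you describe) shows exactly that -- the two literals already denote
the same float before repr() ever sees them:

    >>> 0.100000000000000006 == 0.1
    True
    >>> repr(0.100000000000000006)   # shortest string that round-trips
    '0.1'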


>>> There are infinitely many float literals that will result in the exact
>>> same bit pattern, so any of them is valid for repr(n) to return.
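
Right, and float.hex() makes that easy to see, since it prints the bit
pattern directly (again just stock Python 3):

    >>> (0.1).hex()
    '0x1.999999999999ap-4'
    >>> (0.10000000000000001).hex()   # a different literal, same double
    '0x1.999999999999ap-4'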

>> When are float literals actually converted to floats?  It seems to
>> me that the decimal functions can be used to get the closest one.
>> Then they will be consistent with each other.  (If that isn't already
>> being done.)

> Float literals are converted to floats at compile time. When the
> compiler sees 0.1, or 0.100000000000000006, it works out the nearest
> IEEE double to that literal and stores that double.

> So, by the time any Decimal function sees the float, there's no way to
> tell whether it was constructed from the literal 0.1, the literal
> 0.100000000000000006, or some long chain of transcendental functions
> whose result happened to land within 1 ulp of 0.1.
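
That makes sense.  Constructing a Decimal from the float shows that all it
has to work with is the stored double (standard decimal module, nothing
extra assumed):

    >>> from decimal import Decimal
    >>> Decimal(0.1)
    Decimal('0.1000000000000000055511151231257827021181583404541015625')
    >>> Decimal(0.1) == Decimal(0.100000000000000006)   # same stored value
    True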
>
> The current behavior guarantees that, for any float, float(Decimal(f))
> == float(repr(Decimal(f))) == f. Guido's proposal would preserve that
> guarantee. If that's all you care about, nothing would change. Guido is
> just suggesting that, instead of using the middle Decimal from the
> infinite set of Decimals that would make that true, we use the shortest
> one.

Sounds good to me.
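
For what it's worth, the round-trip property seems easy to check today --
here I'm using str() to get the bare digits out of the Decimal, which I
assume is what was meant by repr above:

    >>> from decimal import Decimal
    >>> f = 0.1
    >>> float(Decimal(f)) == f
    True
    >>> float(str(Decimal(f))) == f
    True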

Can that affect rounding, where a value may round down instead of up, or 
vice versa?  If so, it should be the shortest string that does not cross 
the midpoint between the two closest floats.  (I think I got that right.)

Not sure where I read it last night, but there was a mention that only a 
few languages do this conversion with less than 0.5 ulps of error.  But it 
seems to me it might be more important not to err in the wrong direction 
if there is a choice.
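
As a small sanity check on the midpoint worry (this assumes Python 3.9+
for math.nextafter, just to look at the neighboring float):

    >>> import math
    >>> x = 0.1
    >>> math.nextafter(x, math.inf)   # the next representable float up
    0.10000000000000002
    >>> float(repr(x)) == x           # shortest repr reads back exactly
    True

So as long as the shortest string reads back to exactly the same float,
the rounding direction can't flip.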


Well, I'll leave it up to you guys, but it's a very interesting topic for sure.

Cheers,
    Ron

More information about the Python-ideas mailing list