[Python-ideas] Python Numbers as Human Concept Decimal System

Ron Adam ron3200 at gmail.com
Sat Mar 8 15:05:48 CET 2014



On 03/07/2014 01:01 PM, Mark H. Harris wrote:
> The point here is that binary floats should not be promoted to decimal
> floats by using "exact" copy from binary float representation to decimal
> float representation.

In the case of writing out a number such as Decimal(2.1), a decimal 
literal solves that case since no float is involved.  It would be the 
same as the exact copy, but that is perfectly expected as well.  (It 
already works that way when a string is used in the constructor.)
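
For example (a minimal interactive sketch; the long expansion below is 
the same one quoted in the from_float docs further on):

    >>> from decimal import Decimal
    >>> Decimal('0.1')   # string: the digits that were written
    Decimal('0.1')
    >>> Decimal(0.1)     # float: exact copy of the binary value
    Decimal('0.1000000000000000055511151231257827021181583404541015625')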


Regarding not raising an error when a float is used directly in the 
decimal constructor:  As wrong as it may seem at first, consider that 
Python does not raise an error for this either.

    >>> int(.1)
    0

The user is expected to know that the result will not always be equal 
to what is given.  It's just really obvious in this case because this 
is what the function has always done, and what we expect it to do.

The decimal case isn't that different, except that it isn't so obvious.
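
A minimal sketch of how the surprise usually shows up (the float 
constructor keeps the exact binary value, so it does not compare equal 
to the string form):

    >>> from decimal import Decimal
    >>> Decimal(0.1) == Decimal('0.1')
    False
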
What that means is the docs need to be adjusted to make this easier to 
find...  But currently...



Help on class Decimal in module decimal:

class Decimal(builtins.object)
  |  Decimal(value="0", context=None): Construct a new Decimal object.
  |  value can be an integer, string, tuple, or another Decimal object.
  |  If no value is given, return Decimal('0'). The context does not affect
  |  the conversion and is only passed to determine if the InvalidOperation
  |  trap is active.
  |


Which says nothing about using floats, and it should if that is 
allowed.  It does talk about floats in the from_float method, including 
comments on the exact representation.


Help on built-in function from_float:

from_float(...) method of builtins.type instance
     from_float(f) - Class method that converts a float to a decimal
     number, exactly.
     Since 0.1 is not exactly representable in binary floating point,
     Decimal.from_float(0.1) is not the same as Decimal('0.1').

         >>> Decimal.from_float(0.1)
         Decimal('0.1000000000000000055511151231257827021181583404541015625')
         >>> Decimal.from_float(float('nan'))
         Decimal('NaN')
         >>> Decimal.from_float(float('inf'))
         Decimal('Infinity')
         >>> Decimal.from_float(float('-inf'))
         Decimal('-Infinity')



So I think the first doc string should say it uses from_float() when a 
float is passed, and explicitly say to look at the from_float docs (or, 
better yet, include them directly in the class doc string).
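
If the docs say that, it is also easy to check interactively (a minimal 
sketch; the constructor and from_float perform the same exact 
conversion when given a float):

    >>> from decimal import Decimal
    >>> Decimal(0.1) == Decimal.from_float(0.1)
    True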


As far as ULPs go, unless the average error can be reduced I don't see 
any reason to change it.  If the docs say it uses from_float, and 
from_float gives the same result, I think that's perfectly reasonable 
and likely to answer most users' questions where they expect to find 
the answer.

Cheers,
     Ron
