[Python-ideas] Python Numbers as Human Concept Decimal System

Tim Peters tim.peters at gmail.com
Sun Mar 9 00:20:39 CET 2014


[Mark Dickinson]
>> Python's decimal module is based on Cowlishaw's
>> standard, not on IEEE 754.

[Guido]
> I wonder what Cowlishaw would say about our current discussion. He is also
> the father of Rexx...

I _expect_ Mike likes the status quo.  Explaining why by analogy:

I'm certain the 754 designers would not like the status quo.  To them
string<->number conversions were "operations", no different in
principle than the operations of, say, addition or root extraction.
And _all_ operations in 754 treat the input(s) as infinitely precise,
and produce an output correctly rounded according to the current
context.  Now that's technically not so for float->string operations
in 754, but that's an irrelevant distraction (754 allowed for weaker
rounding guarantees in that specific context because nobody knows how
to achieve correct rounding efficiently in all cases in that context -
and the members of the 754 committee later said they regretted
allowing this exception).

Bottom line here:  for all the 754 designers, and regardless of the
type of x, Decimal(x) should accept x as exactly correct and produce a
decimal object correctly rounded according to the current context
(including setting context flags appropriately - e.g., signaling the
inexact exception if any information in the input was lost due to
rounding).
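[For the archives: Python's stdlib decimal module can already exhibit that 754-style behavior if you construct via the context rather than via the Decimal constructor - Context.create_decimal() rounds the input to the context's precision and sets the status flags. A small sketch:]

```python
from decimal import Context, Inexact, Rounded

# A context with only 6 digits of precision, to make rounding visible.
ctx = Context(prec=6)

# create_decimal() behaves the way the 754 designers wanted: it treats
# the input as infinitely precise, rounds it to the context's precision,
# and records that information was lost.
d = ctx.create_decimal("1.23456789")
print(d)                   # 1.23457
print(ctx.flags[Inexact])  # True - digits were lost to rounding
print(ctx.flags[Rounded])  # True
```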

Now the specific instance of this Mike did pronounce on is
Decimal(string).  It's obvious that the 754 view is that the
assumed-to-be infinitely precise string literal be rounded to the
current context's precision (etc).  But Mike (in email at the time)
explicitly wanted to retain all the string digits in the returned
decimal object, wholly ignoring the current context.
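[Again for the archives, here's that status quo in a snippet - the Decimal constructor ignores the current context's precision, which only kicks in once you perform an operation (the unary plus being the idiomatic no-op for this):]

```python
from decimal import Decimal, getcontext

getcontext().prec = 6  # the constructor ignores this

# All digits of the string literal are retained, per Mike's wish:
d = Decimal("1.234567890123456789")
print(d)   # 1.234567890123456789

# Context precision applies only when an operation is performed:
print(+d)  # 1.23457
```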

So that's what Python does. Is Decimal(0.1) really different?  The 754
view, once you're used to it, is utterly predictable:  Whatever the
internal representation of 0.1, it's assumed to be infinitely precise,
and you round its value to the decimal context's current precision.
Mike's view is usually the same, but in one specific case of
construction he thought differently.  My guess is that he'd choose to
be consistent with "the rule" for construction, having made an
exception for it once already, than choose to be consistent with the
other decimal operations.
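[To make the "utterly predictable" 754 view concrete: Decimal(0.1) takes the binary float as exactly correct and converts its full internal value, context be damned; rounding that exact value to the context's precision is again one unary plus away:]

```python
from decimal import Decimal, getcontext

# The float 0.1 is accepted as exactly correct; the constructor emits
# its precise binary64 value without consulting the context:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# The 754 view - round that exact value to the current precision:
getcontext().prec = 28  # the default
print(+Decimal(0.1))
# 0.1000000000000000055511151231
```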

Or we could hark back to REXX's desire to present an arithmetic that
works the way people learned in school.  I'd try that, except I'm not
sure kids today learn arithmetic in school any more beyond learning
how to push buttons ;-)
