[Python-Dev] Adventures with Decimal
Tim Peters
tim.peters at gmail.com
Fri May 20 02:50:42 CEST 2005
[Raymond Hettinger]
> For brevity, the above example used the context free
> constructor, but the point was to show the consequence
> of a precision change.
Yes, I understood your point. I was making a different point:
"changing precision" isn't needed _at all_ to get surprises from a
constructor that ignores context. Your example happened to change
precision, but that wasn't essential to getting surprised by feeding
strings to a context-ignoring Decimal constructor. In effect, this
creates the opportunity for everyone to get surprised by something only
experts should need to deal with.
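For instance, here's a small illustration of that surprise as the decimal module behaves today (default ROUND_HALF_EVEN rounding assumed):

```python
from decimal import Decimal, getcontext

getcontext().prec = 4  # context says: 4 significant digits

d = Decimal("1.2345678901")  # but the constructor ignores the context
print(d)                     # all 11 digits stored exactly

# Arithmetic *does* honor the context, so the rounding shows up
# only once the value is first used in an operation:
print(d + 0)                 # rounded to 4 digits
```

The value a user sees right after construction isn't the value arithmetic actually works with, which is exactly the mixing of models at issue.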
There seems to be an unspoken "wow that's cool!" kind of belief that
because Python's Decimal representation is _potentially_ unbounded,
the constructor should build an object big enough to hold any argument
exactly (up to the limit of available memory). And that would be
appropriate for, say, an unbounded rational type -- and is appropriate
for Python's unbounded integers.
But Decimal is a floating type with fixed (albeit user-adjustable)
precision, and ignoring that mixes arithmetic models in a
fundamentally confusing way. I would have no objection to a named
method that builds a "big as needed to hold the input exactly" Decimal
object, but it shouldn't be the behavior of the
everyone-uses-it-constructor. It's not an oversight that the IBM
standard defines no operations that ignore context (and note that
string->float is a standard operation): it's trying to provide a
consistent arithmetic, all the way from input to output. Part of
consistency is applying "the rules" everywhere, in the absence of
killer-strong reasons to ignore them.
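For what it's worth, the context-honoring string->float conversion the standard describes is already reachable in the module, just not through the constructor -- it's Context.create_decimal (current decimal module API assumed):

```python
from decimal import Context

ctx = Context(prec=4)  # a context limited to 4 significant digits

# create_decimal applies the context during conversion, as the
# standard's to-number operation does:
print(ctx.create_decimal("1.2345678901"))  # rounded on input
```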
Back to your point, maybe you'd be happier if a named (say)
apply_context() method were added? I agree unary plus is a
funny-looking way to spell it (although that's just another instance
of applying the same rules to all operations).
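That is, with the module as it stands, unary plus already does the job an apply_context() method would do, funny-looking or not:

```python
from decimal import Decimal, getcontext

getcontext().prec = 6
exact = Decimal("3.14159265358979")  # constructed exactly, 15 digits

# Unary plus is a context operation, so it rounds the operand
# to the current precision -- the "apply_context" spelling today:
print(+exact)  # 6 significant digits
```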