Re: [Python-Dev] Adventures with Decimal

Raymond Hettinger wrote:
IMO, user input (or the full numeric strings in a text data file) is sacred and presumably done for a reason -- the explicitly requested digits should not be thrown away without good reason.
I still don't understand what's so special about the input phase that it should be treated sacredly, while happily desecrating the result of any *other* operation. To my mind, if you were really serious about treating precision as sacred, the result of every operation would be the greater of the precisions of the inputs. That's what happens in C or Fortran - you add two floats and you get a float; you add a float and a double and you get a double; etc.
Truncating/rounding a literal at creation time doesn't work well when you are going to be using those values several times, each with a different precision.
This won't be a problem if you recreate the values from strings each time. You're going to have to be careful anyway, e.g. if you calculate some constants, such as degreesToRadians = pi/180, you'll have to make sure that you recalculate them with the desired precision before rerunning the algorithm.
Remember, the design documents for the spec state a general principle: the digits of a decimal value are *not* significands; rather, they are exact, and all arithmetic on them is exact, with the *result* being subject to optional rounding.
I don't see how this is relevant, because digits in a character string are not "digits of a decimal value" according to what we are meaning by "decimal value" (i.e. an instance of Decimal). In other words, this principle only applies *after* we have constructed a Decimal instance.

--
Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,          | A citizen of NewZealandCorp, a       |
Christchurch, New Zealand          | wholly-owned subsidiary of USA Inc.  |
greg.ewing@canterbury.ac.nz        +--------------------------------------+

Raymond Hettinger wrote:
IMO, user input (or the full numeric strings in a text data file) is sacred and presumably done for a reason -- the explicitly requested digits should not be thrown away without good reason.
I still don't understand what's so special about the input phase that it should be treated sacredly, while happily desecrating the result of any *other* operation.
The 'difference' here is that, with unlimited precision decimal representations, there is no "input phase". The decimal number can represent the value, sign, and exponent in the character string the user provided _exactly_, and indeed it could be implemented using strings as the internal representation -- in which case the 'construction' of a new number is simply a string copy operation. There is no operation taking place, as there is no narrowing necessary. This is quite unlike (for example) converting an ASCII string "1.01" to a binary floating-point double, which has a fixed precision and no base-5 component (1.01 is 101/100, and the factor of 5 in that denominator means the value has no finite base-2 representation).

mfc