Re: [Python-Dev] [Python-checkins] python/nondist/peps pep-0343.txt, 1.11, 1.12

+ def sin(x):
+     "Return the sine of x as measured in radians."
+     do with_extra_precision():
+         i, lasts, s, fact, num, sign = 1, 0, x, 1, x, 1
+         while s != lasts:
+             lasts = s
+             i += 2
+             fact *= i * (i-1)
+             num *= x * x
+             sign *= -1
+             s += num / fact * sign
+         return +s
One more change: The final "return +s" should be unindented. It should be at the same level as the "do with_extra_precision()". The purpose of the "+s" is to force the result to be rounded back to the *original* precision. This nuance is likely to be the bane of folks who shift back and forth between different levels of precision. The following example shows the kind of oddity that can arise when working with quantities that have not been rounded to the current precision:
>>> from decimal import getcontext, Decimal as D
>>> getcontext().prec = 3
>>> D('3.104') + D('2.104')
Decimal("5.21")
>>> D('3.104') + D('0.000') + D('2.104')
Decimal("5.20")
Raymond

[Raymond Hettinger]
... One more change: The final "return +s" should be unindented. It should be at the same level as the "do with_extra_precision()". The purpose of the "+s" is to force the result to be rounded back to the *original* precision.
This nuance is likely to be the bane of folks who shift back and forth between different levels of precision.
Well, most of the time a typical user will never change precision. Of the remaining uses, most will set precision once at the start of the program and never change it again. Library authors may change precision frequently, but they should be experts.
The following example shows the kind of oddity that can arise when working with quantities that have not been rounded to the current precision:
>>> from decimal import getcontext, Decimal as D
>>> getcontext().prec = 3
>>> D('3.104') + D('2.104')
Decimal("5.21")
>>> D('3.104') + D('0.000') + D('2.104')
Decimal("5.20")
I think it shows more why it was a mistake for the decimal constructor to extend the standard (the string->decimal operation in the standard respects context settings; the results differ here because D(whatever) ignores context settings; having a common operation ignore context is ugly and error-prone).

[Raymond]
The following example shows the kind of oddity that can arise when working with quantities that have not been rounded to the current precision:
>>> from decimal import getcontext, Decimal as D
>>> getcontext().prec = 3
>>> D('3.104') + D('2.104')
Decimal("5.21")
>>> D('3.104') + D('0.000') + D('2.104')
Decimal("5.20")
[Tim]
I think it shows more why it was a mistake for the decimal constructor to extend the standard (the string->decimal operation in the standard respects context settings; the results differ here because D(whatever) ignores context settings;
For brevity, the above example used the context free constructor, but the point was to show the consequence of a precision change. That oddity occurs even in the absence of a call to the Decimal constructor. For instance, using the context aware constructor, Context.create_decimal(), we get the same result when switching precision:
>>> from decimal import getcontext
>>> context = getcontext()
>>> x = context.create_decimal('3.104')
>>> y = context.create_decimal('2.104')
>>> z = context.create_decimal('0.000')
>>> context.prec = 3
>>> x + y
Decimal("5.21")
>>> x + z + y
Decimal("5.20")
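(A minimal sketch of the idiom discussed below: once the stored values are themselves rounded to the new precision with unary plus, the two sums agree.)

>>> x, y, z = +x, +y, +z        # unary plus applies the current context (prec=3) to each value
>>> x + y
Decimal("5.20")
>>> x + z + y
Decimal("5.20")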
The whole point of the unary plus operation in the decimal module is to force a rounding using the current context. This needs to be a standard practice whenever someone is changing precision in midstream. Most folks won't (or shouldn't) be doing that, but those who do (as they would in the PEP's use case) need a unary plus after switching precision.

As for why the normal Decimal constructor is context free, PEP 327 indicates discussion on the subject, but who made the decision and why is not clear.

Raymond

On 5/18/05, Raymond Hettinger <python@rcn.com> wrote:
>>> from decimal import getcontext
>>> context = getcontext()
>>> x = context.create_decimal('3.104')
>>> y = context.create_decimal('2.104')
>>> z = context.create_decimal('0.000')
>>> context.prec = 3
>>> x + y
Decimal("5.21")
>>> x + z + y
Decimal("5.20")
My point here is to always remind everybody that Decimal solves the problem with binary floating point, but not with representation issues. If you don't have enough precision (for example to represent one third), you'll get mysterious results. That's why, IMO, the Spec provides two traps, one for Rounded and one for Inexact, so you can be aware of what exactly is happening.
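(A minimal illustration of those traps: with Inexact trapped, a result that cannot be represented exactly raises a signal instead of silently rounding.)

>>> from decimal import getcontext, Decimal, Inexact
>>> getcontext().traps[Inexact] = True
>>> try:
...     Decimal("1") / Decimal("3")      # one third is inexact at any finite precision
... except Inexact:
...     print "Inexact signal raised"
...
Inexact signal raised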
As for why the normal Decimal constructor is context free, PEP 327 indicates discussion on the subject, but who made the decision and why is not clear.
There was no decision. Originally the context didn't get applied at creation time. And then the situation arose where it would be nice to be able to apply it at creation time (for situations where not doing so would be costly), so a method on the context was born.

. Facundo

Blog: http://www.taniquetil.com.ar/plog/
PyAr: http://www.python.org/ar/

[Tim suggesting that I'm clueless and dazzled by sparkling lights]
There seems to be an unspoken "wow that's cool!" kind of belief that because Python's Decimal representation is _potentially_ unbounded, the constructor should build an object big enough to hold any argument exactly (up to the limit of available memory). And that would be appropriate for, say, an unbounded rational type -- and is appropriate for Python's unbounded integers.
I have no such thoughts but do strongly prefer the current design. I recognize that it allows a user to specify an input at a greater precision than the current context (in fact, I provided the example).

The overall design of the module and the spec is to apply context to the results of operations, not their inputs. In particular, the spec recognizes that contexts can change and, rather than specifying automatic or implicit context application to all existing values, it provides the unary plus operation so that such an application is explicit. The use of extra digits in a calculation is not invisible, as the calculation will signal Rounded and Inexact (if non-zero digits are thrown away).

One of the original motivating examples was "schoolbook" arithmetic, where the input string precision is incorporated into the calculation. IMO, input truncation/rounding is inconsistent with that motivation. Likewise, input rounding runs contrary to the basic goal of eliminating representation error.

With respect to integration with the rest of Python (everything beyond the spec but needed to work with it), I suspect that altering the Decimal constructor is fraught with issues such as the string-to-decimal-to-string round trip becoming context dependent. I haven't thought it through yet but suspect that it does not bode well for repr(), pickling, shelving, etc. Likewise, I suspect that traps await multi-threaded or multi-context apps that need to share data. Also, adding another step to the constructor is not going to help the already disastrous performance.

I appreciate efforts to make the module as idiot-proof as possible. However, that is a pipe dream. By adopting and exposing the full standard instead of the simpler X3.274 subset, using the module is a non-trivial exercise and, even for experts, is a complete PITA. Even a simple fixed-point application (money, for example) requires dealing with quantize(), normalize(), rounding modes, signals, etc. By default, outputs are not normalized, so it is difficult even to recognize what a zero looks like. Just getting output without exponential notation is difficult. If someone wants to craft another module to wrap around and candy-coat the Decimal API, I would be all for it. Just recognize that the full spec doesn't have a beginner mode -- for better or worse, we've simulated a hardware FPU.

Lastly, I think it is a mistake to make a change at this point. The design of the constructor survived all drafts of the PEP, comp.lang.python discussion, python-dev discussion, all early implementations, sandboxing, the Py2.4 alpha/beta, cookbook contributions, and several months in the field. I say we document a recommendation to use Context.create_decimal() and get on with life.

Clueless in Boston

P.S. With the 28-digit default precision, the odds of this coming up in practice are slim (when was the last time you typed in a floating-point value with more than 28 digits; further, if you had, would it have ruined your day if your 40 digits were not first rounded to 28 before being used?). IOW, the bug tracker lists hundreds of bigger fish to fry without having to change a published API (pardon the mixed metaphor).

Sorry, I simply can't make more time for this. Shotgun mode:

[Raymond]
I have no such thoughts but do strongly prefer the current design.
How can you strongly prefer it? You asked me whether I typed floats with more than 28 significant digits. Not usually <wink>. Do you? If you don't either, how can you strongly prefer the current design over a change that makes no difference to what you do?
... The overall design of the module and the spec is to apply context to the results of operations, not their inputs.
But string->float is an _operation_ in the spec, as it has been since 1985 in IEEE-754 too. The float you get is the result of that operation, and is consistent with normal numeric practice going back to the first time Fortran grew a distinction between double and single precision. There too the common practice was to write all literals as double-precision, and leave it to the compiler to round off excess bits if the assignment target was of single precision. That made it easy to change working precision via fiddling a single "implicit" (a kind of type declaration) line. The same kind of thing would be pleasantly applicable for decimal too -- if the constructor followed the rules.
In particular, the spec recognizes that contexts can change and rather than specifying automatic or implicit context application to all existing values, it provides the unary plus operation so that such an application is explicit. The use of extra digits in a calculation is not invisible as the calculation will signal Rounded and Inexact (if non-zero digits are thrown away).
Doesn't change that the standard rigorously specifies how strings are to be converted to decimal floats, or that our constructor implementation doesn't do that.
One of the original motivating examples was "schoolbook" arithmetic where the input string precision is incorporated into the calculation.
Sorry, doesn't ring a bell to me. Whose example was this?
IMO, input truncation/rounding is inconsistent with that motivation.
Try keying more digits into your hand calculator than it can hold <0.5 wink>.
Likewise, input rounding runs contrary to the basic goal of eliminating representation error.
It's no surprise that an exact value containing more digits than current precision gets rounded. What _is_ surprising is that the decimal constructor doesn't follow that rule, instead making up its own rule. It's an ugly inconsistency at best.
With respect to integration with the rest of Python (everything beyond that spec but needed to work with it), I suspect that altering the Decimal constructor is fraught with issues such as the string-to-decimal-to-string roundtrip becoming context dependent.
Nobody can have a reasonable expectation that string -> float -> string is an identity for any fixed-precision type across all strings. That's just unrealistic. You can expect string -> float -> string to be an identity if the string carries no more digits than current precision. That's how a bounded type works. Trying to pretend it's not bounded in this one case is a conceptual mess.
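(A quick sketch of that distinction using the existing context-aware constructor at precision 3: a string within the working precision survives the round trip; one beyond it does not.)

>>> from decimal import Context
>>> ctx = Context(prec=3)
>>> str(ctx.create_decimal('3.14'))      # fits in 3 digits: identity holds
'3.14'
>>> str(ctx.create_decimal('3.104'))     # 4 digits: rounded on the way in
'3.10'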
I haven't thought it through yet but suspect that it does not bode well for repr(), pickling, shelving, etc.
The spirit of the standard is always to deliver the best possible approximation consistent with current context. Unpickling and unshelving should play that game too. repr() has a special desire for round-trip fidelity.
Likewise, I suspect that traps await multi-threaded or multi-context apps that need to share data.
Like what? Thread-local context precision is a reality here, going far beyond just string->float.
Also, adding another step to the constructor is not going to help the already disastrous performance.
(1) I haven't found it to be a disaster. (2) Over the long term, the truly speedy implementations of this standard will be limited to a fixed set of relatively small precisions (relative to, say, 1000000, not to 28 <wink>). In that world it would be unboundedly more expensive to require the constructor to save every bit of every input: rounding string->float is a necessity for speedy operation over the long term.
I appreciate efforts to make the module as idiot-proof as possible.
That's not my interest here. My interest is in a consistent, std-conforming arithmetic, and all fp standards since IEEE-754 recognized that string->float is "an operation" much like every other fp operation. Consistency helps by reducing complexity. Most users will never bump into this, and experts have a hard enough job without gratuitous deviations from a well-defined spec. What's the _use case_ for carrying an unbounded amount of information into a decimal instance? It's going to get lost upon the first operation anyway.
However, that is a pipe dream. By adopting and exposing the full standard instead of the simpler X3.274 subset, using the module is a non-trivial exercise and, even for experts, is a complete PITA.
Rigorous numeric programming is a difficult art. That's life. The many exacting details in the standard aren't the cause of that, they're a distillation of decades of numeric experience by bona fide numeric experts. These are the tools you need to do a rigorous job -- and most users can ignore them completely, or at worst set precision once at the start and forget it. _Most_ of the stuff (by count) in the standard is for the benefit of expert library authors, facing a wide variety of externally imposed requirements.
Even a simple fixed-point application (money, for example) requires dealing with quantize(), normalize(), rounding modes, signals, etc.
I don't know why you'd characterize a monetary application as "simple". To the contrary, they're as demanding as they come. For example, requirements for bizarre rounding come with that territory, and the standard exposes tools to _help_ deal with that. The standard didn't invent rounding modes, it recognizes that needing to deal with them is a fact of life, and that it's much more difficult to do without any help from the core arithmetic. So is needing to deal with many kinds of exceptional conditions, and in different ways depending on the app -- that's why all that machinery is there.
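(For concreteness, a minimal sketch of the rounding-mode machinery being discussed: the same exact value quantized to cents under two different rounding modes.)

>>> from decimal import Decimal, ROUND_HALF_UP
>>> Decimal("1.565").quantize(Decimal("0.01"))                          # default half-even ("banker's") rounding
Decimal("1.56")
>>> Decimal("1.565").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)  # the rounding a cashier expects
Decimal("1.57")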
By default, outputs are not normalized so it is difficult even to recognize what a zero looks like.
You're complaining about a feature there <wink>. That is, the lack of normalization is what makes 1.10 the result of 2.21 - 1.11, rather than 1.1 or 1.100000000000000000000000000. 1.10 is what most people expect.
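(The behavior being described, in a quick session: the un-normalized result keeps the trailing zero, and normalize() strips it.)

>>> from decimal import Decimal
>>> Decimal("2.21") - Decimal("1.11")
Decimal("1.10")
>>> (Decimal("2.21") - Decimal("1.11")).normalize()
Decimal("1.1")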
Just getting output without exponential notation is difficult.
That's a gripe I have with the std too. Its output formats are too simple-minded and few. I had the same frustration using REXX. Someday the %f/%g/%e format codes should learn how to deal with decimals, and that would be pleasant enough for me.
If someone wants to craft another module to wrap around and candy-coat the Decimal API, I would be all for it.
For example, Facundo is doing that with a money class, yes? That's fine. The standard tries to support many common arithmetic needs, but big as it is, it's just a start.
Just recognize that the full spec doesn't have a beginner mode -- for better or worse, we've simulated a hardware FPU.
I haven't seen a HW FPU with unbounded precision, or one that does decimal arithmetic. Apart from the limited output modes, I have no reason to suspect that a beginner will have any particular difficulty with decimal. They don't have to know anything about signals and traps, rounding modes or threads, etc etc -- right out of the box, except for output format, it acts very much like a high-end hand calculator.
Lastly, I think it is a mistake to make a change at this point.
It's a worse mistake to let a poor decision slide indefinitely -- it gets harder & harder to change it over time. Heck, to listen to you, decimal is so bloody complicated nobody could possibly be using it now anyway <wink>.
The design of the constructor survived all drafts of the PEP, comp.lang.python discussion, python-dev discussion, all early implementations, sandboxing, the Py2.4 alpha/beta, cookbook contributions, and several months in the field.
So did every other aspect of Python you dislike now <0.3 wink>. It never occurred to me that the implementation _wouldn't_ follow the spec in its treatment of string->float. I whined about that when I discovered it, late in the game. A new, conforming string->float method was added then, but for some reason (or no reason) I don't recall, the constructor wasn't changed. That was a mistake.
I say we document a recommendation to use Context.create_decimal() and get on with life.
....
P.S. With 28 digit default precision, the odds of this coming up in practice are slim (when was the last time you typed in a floating point value with more than 28 digits; further, if you had, would it have ruined your day if your 40 digits were not first rounded to 28 before being used).
Depends on the app, of course. More interesting is the day when someone ports an app from a conforming implementation of the standard, sets precision to (say) 8, and gets different results in Python despite that the app stuck solely to standard operations. Of course that can be a genuine disaster for a monetary application -- extending standards in non-obvious ways imposes many costs of its own, but they fall on users, and aren't apparent at first. I want to treat the decimal module as if the standard it purports to implement will succeed.

I know I should stay out of here, but isn't Decimal() with a string literal as argument a rare case (except in examples)? It's like float() with a string argument -- while you *can* write float("1.01"), nobody does that. What people do all the time is parse a number out of some larger context into a string, and then convert the string to a float by passing it to float(). I assume that most uses of the Decimal() constructor will be similar. In that case, it makes total sense to me that the context's precision should be used, and if the parsed string contains an insane number of digits, it will be rounded.

I guess the counter-argument is that because we don't have Decimal literals, Decimal("12345") is used as a pseudo-literal, so it actually occurs more frequently than float("12345"). Sure. But the same argument applies: if I write a floating-point literal in Python (or C, or Java, or any other language) with an insane number of digits, it will be rounded.

So, together with the 28-digit default precision, I'm fine with changing the constructor to use the context by default. If you want all the precision given in the string, even if it's a million digits, set the precision to the length of the string before you start; that's a decent upper bound. :-)

--
--Guido van Rossum (home page: http://www.python.org/~guido/)

[Guido van Rossum]
I know I should stay out of here,
Hey, it's still your language <wink>.
but isn't Decimal() with a string literal as argument a rare case (except in examples)? It's like float() with a string argument -- while you *can* write float("1.01"), nobody does that. What people do all the time is parse a number out of some larger context into a string, and then convert the string to a float by passing it to float(). I assume that most uses of the Decimal() constructor will be similar.
I think that's right. For example, currency exchange rates, and stock prices, are generally transmitted as decimal strings now, and those will get fed to a Decimal constructor.

OTOH, in scientific computing it's common to specify literals to very high precision (like 40 decimal digits). Things like pi, e, sqrt(2), tables of canned numeric quadrature points, canned coefficients for polynomial approximations of special functions, etc. The authors don't expect "to get" all they specify, what they expect is that various compilers on various platforms will give them as much precision as they're capable of using efficiently. Rounding is expected then, and indeed pragmatically necessary (carrying precision beyond that natively supported comes with high runtime costs -- and that can be equally true of Decimal literals carried with digits beyond context precision: the standard requires that results be computed "as if to infinite precision then rounded once" using _all_ digits in the inputs).
In that case, it makes total sense to me that the context's precision should be used, and if the parsed string contains an insane number of digits, it will be rounded.
That's the IBM standard's intent (and mandatory in its string->float operation).
I guess the counter-argument is that because we don't have Decimal literals, Decimal("12345") is used as a pseudo-literal, so it actually occurs more frequently than float("12345"). Sure. But the same argument applies: if I write a floating point literal in Python (or C, or Java, or any other language) with an insane number of digits, it will be rounded.
Or segfault <0.9 wink>.
So, together with the 28-digit default precision, I'm fine with changing the constructor to use the context by default. If you want all the precision given in the string, even if it's a million digits, set the precision to the length of the string before you start; that's a decent upper bound. :-)
That is indeed the intended way to do it. Note that this also applies to integers passed to a Decimal constructor. Maybe it's time to talk about an unbounded rational type again <ducks>.
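(A sketch of that recipe with the existing context-aware constructor; the length of the string is a cheap upper bound on its digit count, so nothing gets rounded.)

>>> from decimal import Context
>>> s = "3.14159265358979323846264338327950288419716939937510"
>>> ctx = Context(prec=len(s))           # len(s) >= number of digits in s, so conversion is exact
>>> ctx.create_decimal(s)
Decimal("3.14159265358979323846264338327950288419716939937510")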

I sense a religious fervor about this, so go ahead and do whatever you want. Please register my -1 for the following reasons:

a.) It re-introduces representation error into a module that worked so hard to overcome that very problem. The PEP explicitly promises that a transformation from a literal involves no loss of information. Likewise, it promises that "context just affects operations' results".

b.) It is inconsistent with the idea of having the input specify its own precision: http://www2.hursley.ibm.com/decimal/decifaq1.html#tzeros

c.) It is both untimely and unnecessary. The module is functioning according to its tests, the specification test suite, and the PEP. Anthony should put his foot down as this is NOT a bugfix, it is a change in concept. The Context.create_decimal() method already provides a standard-conforming implementation of the to-number conversion: http://www.python.org/peps/pep-0327.html#creating-from-context

d.) I believe it will create more problems than it would solve. If needed, I can waste an afternoon coming up with examples. Likewise, I think it will make the module more difficult to use (esp. when experimenting with the effect of changing precision on results).

e.) It does not eliminate the need to use the plus operation to force rounding/truncation when switching precision.

f.) To be consistent, one would need to force all operation inputs to have the context applied before their use. The standard specifically does not do this and allows for operation inputs to be of a different precision than the current context (that is the reason for the plus operation).

g.) It steers people in the wrong direction. Increasing precision is generally preferable to rounding or truncating explicit inputs. I included two Knuth examples in the docs to show the benefits of bumping up precision when needed.

h.) It complicates the heck out of storage, retrieval, and input. Currently, decimal objects have a meaning independent of context. With the proposed change, the meaning becomes context dependent.

i.) After having been explicitly promised by the PEP, discussed on the newsgroup and python-dev, and released to the public, a change of this magnitude warrants a newsgroup announcement and a comment period.

A use case:
-----------
The first use case that comes to mind is in the math.toRadians() function. When originally posted, there was an objection that the constant degToRad was imprecise in the last bit because it was expressed as the ratio of two literals that the compiler would have rounded, resulting in a double rounding.

Link to rationale for the spec:
-------------------------------
http://www2.hursley.ibm.com/decimal/IEEE-cowlishaw-arith16.pdf
See the intro to section 4, which says: The digits in decimal are not significands; rather, the numbers are exact. The arithmetic on those numbers is also exact unless rounding to a given precision is specified.

Link to the discussion relating decimal design rationale to schoolbook math:
-----------------------------------------------------------------------------
I can't find this link. If someone remembers, please post it.

Okay, I've said my piece. Do what you will.

Raymond

Addenda:

j.) The same rules would need to apply to all forms of the Decimal constructor, so Decimal(someint) would also need to truncate/round if it has more than precision digits -- likewise with Decimal(fromtuple) and Decimal(fromdecimal). All are problematic. Integer conversions are expected to be exact but may not be after the change. Conversion from another decimal should be idempotent, but implicit rounding/truncation will break that. The fromtuple/totuple round trip can get broken. You generally specify a tuple when you know exactly what you want.

k.) The biggest client of all these methods is the Decimal module itself. Throughout the implementation, the code calls the Decimal constructor to create intermediate values. Every one of those calls would need to be changed to specify a context. Some of those cases are not trivially changed (for instance, the hash method doesn't have a context, but it needs to check whether a decimal value is exactly an integer so it can hash to that value). Likewise, how do you use a decimal value as a dictionary key when the equality check is context dependent (change precision and lose the ability to reference an entry)?

Be careful with this proposed change. It is a can of worms. Better yet, don't do it. We already have a context-aware constructor method if that is what you really want.

Raymond

Raymond Hettinger wrote:
Be careful with this proposed change. It is a can of worms. Better yet, don't do it. We already have a context aware constructor method if that is what you really want.
And don't forget that 'context-aware construction' can also be written:

    val = +Decimal(string_repr)

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
---------------------------------------------------------------
http://boredomandlaziness.blogspot.com

On Fri, May 20, 2005, Raymond Hettinger wrote:
k.) The biggest client of all these methods is the Decimal module itself. Throughout the implementation, the code calls the Decimal constructor to create intermediate values. Every one of those calls would need to be changed to specify a context. Some of those cases are not trivially changed (for instance, the hash method doesn't have a context but it needs to check to see if a decimal value is exactly an integer so it can hash to that value). Likewise, how do you use a decimal value for a dictionary key when the equality check is context dependent (change precision and lose the ability to reference an entry)?
I'm not sure this is true, and if it is true, I think the Decimal module is poorly implemented. There are two uses for the Decimal() constructor:

* copy constructor for an existing Decimal instance (or passing in a tuple directly to mimic the barebones internal)
* conversion constructor for other types, such as string

Are you claiming that the intermediate values are being constructed as strings and then converted back to Decimal objects? Is there something else I'm missing? I don't think Tim is claiming that the copy constructor needs to obey context, just string conversions.

Note that comparison is not context-dependent, because context only applies to results of operations, and the spec's comparison operator (equivalent to cmp()) only returns (-1, 0, 1) -- guaranteed to be within the precision of any context. ;-)

Note that hashing is not part of the standard, so whatever makes most sense in a Pythonic context would be appropriate. It's perfectly reasonable for Decimal's __int__ method to be unbounded because Python ints are unbounded.

All these caveats aside, I don't have a strong opinion about what we should do. Overall, my sentiments are with Tim that we should fix this, but my suspicion is that it probably doesn't matter much.

--
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"The only problem with Microsoft is they just have no taste."  --Steve Jobs

On 5/20/05, Tim Peters <tim.peters@gmail.com> wrote:
That's not my interest here. My interest is in a consistent,
Point. Every time I explain Decimal, I have to say "always the context is applied EXCEPT at construction time".
If someone wants to craft another module to wrap around and candy-coat the Decimal API, I would be all for it.
For example, Facundo is doing that with a money class, yes? That's
Yes, and so far it's pretty much "Hey! Let's take Decimal and define how we configure and use it".

. Facundo

Blog: http://www.taniquetil.com.ar/plog/
PyAr: http://www.python.org/ar/

[Raymond Hettinger]
For brevity, the above example used the context free constructor, but the point was to show the consequence of a precision change.
Yes, I understood your point. I was making a different point: "changing precision" isn't needed _at all_ to get surprises from a constructor that ignores context. Your example happened to change precision, but that wasn't essential to getting surprised by feeding strings to a context-ignoring Decimal constructor. In effect, this creates the opportunity for everyone to get surprised by something only experts should need to deal with.

There seems to be an unspoken "wow that's cool!" kind of belief that because Python's Decimal representation is _potentially_ unbounded, the constructor should build an object big enough to hold any argument exactly (up to the limit of available memory). And that would be appropriate for, say, an unbounded rational type -- and is appropriate for Python's unbounded integers. But Decimal is a floating type with fixed (albeit user-adjustable) precision, and ignoring that mixes arithmetic models in a fundamentally confusing way.

I would have no objection to a named method that builds a "big as needed to hold the input exactly" Decimal object, but it shouldn't be the behavior of the everyone-uses-it constructor. It's not an oversight that the IBM standard defines no operations that ignore context (and note that string->float is a standard operation): it's trying to provide a consistent arithmetic, all the way from input to output. Part of consistency is applying "the rules" everywhere, in the absence of killer-strong reasons to ignore them.

Back to your point, maybe you'd be happier if a named (say) apply_context() method were added? I agree unary plus is a funny-looking way to spell it (although that's just another instance of applying the same rules to all operations).

On Wed, May 18, 2005, Tim Peters wrote:
I think it shows more why it was a mistake for the decimal constructor to extend the standard (the string->decimal operation in the standard respects context settings; the results differ here because D(whatever) ignores context settings; having a common operation ignore context is ugly and error-prone).
Not sure what the "right" answer is, but I wanted to stick my oar in to say that I think that Decimal has not been in the field long enough or widely-enough used that we should feel that the API has been set in stone. If there's agreement that a mistake was made, let's fix it! -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ "And if that makes me an elitist...I couldn't be happier." --JMS

On 5/18/05, Aahz <aahz@pythoncraft.com> wrote:
Not sure what the "right" answer is, but I wanted to stick my oar in to say that I think that Decimal has not been in the field long enough or widely-enough used that we should feel that the API has been set in stone. If there's agreement that a mistake was made, let's fix it!
+1.

BTW, it's worth noting that for Money (http://sourceforge.net/projects/pymoney) we decided to apply the context at creation time....

. Facundo

Blog: http://www.taniquetil.com.ar/plog/
PyAr: http://www.python.org/ar/

Not sure what the "right" answer is, but I wanted to stick my oar in to say that I think that Decimal has not been in the field long enough or widely-enough used that we should feel that the API has been set in stone. If there's agreement that a mistake was made, let's fix it!
There is not agreement. I prefer the current behavior and think changing it would introduce more problems than it would solve. Further, the API currently provides both context-aware and context-free construction -- all the tools needed are already there.

Let's leave this alone and simply document the best practices (using unary plus after a precision change, and constructing with create_decimal whenever context is important to construction).

Raymond

Tim Peters wrote:
[Raymond Hettinger]
>>> from decimal import getcontext, Decimal as D
>>> getcontext().prec = 3
>>> D('3.104') + D('2.104')
Decimal("5.21")
>>> D('3.104') + D('0.000') + D('2.104')
Decimal("5.20")
the results differ here because D(whatever) ignores context settings; having a common operation ignore context is ugly and error-prone).
I don't see that it's because of that. Even if D(whatever) didn't ignore the context settings, you'd get the same oddity if the numbers came from somewhere else with a different precision.

I'm very uncomfortable about the whole idea of a context-dependent precision. It just seems to be asking for trouble.

--
Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,          | A citizen of NewZealandCorp, a        |
Christchurch, New Zealand          | wholly-owned subsidiary of USA Inc.   |
greg.ewing@canterbury.ac.nz        +--------------------------------------+

[Greg Ewing]
I don't see that it's because of that. Even if D(whatever) didn't ignore the context settings, you'd get the same oddity if the numbers came from somewhere else with a different precision.
Most users don't change context precision, and in that case there is no operation defined in the standard that can _create_ a decimal "with different precision". Python's Decimal constructor, however, can (Python's Decimal constructor performs an operation that's not in the standard -- it's a Python-unique extension to the standard).
I'm very uncomfortable about the whole idea of a context-dependent precision. It just seems to be asking for trouble.
If you're running on a Pentium box, you're using context-dependent precision a few million times per second. Most users will be as blissfully unaware of decimal's context precision as you are of the Pentium FPU's context precision.

Most features in fp standards are there for the benefit of experts. You're not required to change context; those who need such features need them desperately, and don't care whether you think they should <wink>. An alternative is a God-awful API that passes a context object explicitly to every operation. You can, e.g., kiss infix "+" goodbye then. Some implementations of the standard do exactly that. You might want to read the standard before getting carried off by gut reactions:

    http://www2.hursley.ibm.com/decimal/
participants (7)
- Aahz
- Facundo Batista
- Greg Ewing
- Guido van Rossum
- Nick Coghlan
- Raymond Hettinger
- Tim Peters