
Currently, calling the Decimal constructor with an invalid literal (such as Decimal("Fred")) returns a quiet NaN. This was done because the spec appeared to require it (in fact, there are IBM test cases to confirm that behavior).

I've discussed this with Mike Cowlishaw (author of the spec and test cases) and he has just clarified that, "... the intent here was not to disallow an exception. The analogy, perhaps, is to a divide-by-zero: the latter raises Invalid Operation and returns a qNaN. The string conversion is similar. (Again, in some implementations/languages, the result after such an exception is not available.) I'll see if I can clarify that, at least making it clear that Invalid Operation is OK at that point."

So, my question for the group is whether to:

* leave it as-is
* raise a ValueError just like float('abc') or int('abc')
* raise an Invalid Operation and return a quiet NaN

Either of the last two involves editing the third-party test cases, which I am loath to do. The second is the most Pythonic but does not match Mike's clarification. The third stays within the context of the spec but doesn't bode well for Decimal interacting with the rest of Python. That issue is unavoidable to some degree because no other Python numeric type has context-sensitive operations, settable traps, and result flags.

A separate question is determining the default precision. Currently, it is set at 9, which conveniently matches the test cases, the docstring examples, and the examples in the spec. It is also friendly to running time. Tim had suggested that 20 or so would handle many user requirements without needing a context change. Mike had suggested default single and double precisions matching those proposed in 754R. The rationale behind those sizes has nothing to do with use cases; rather, they were chosen so that certain representations (not the ones we use) fit neatly into byte/word-sized multiples (once again showing the hardware orientation of the spec).

No matter what the default, the precision is easy to change:

>>> getcontext().prec = 42
>>> Decimal(1) / Decimal(7)
Decimal("0.142857142857142857142857142857142857142857")

Raymond
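[For reference, a minimal sketch of the two behaviors Raymond contrasts, assuming the decimal module from the 2.4 alpha; exact reprs and error messages may differ by version:

>>> float("abc")        # the model for option two: raise ValueError
Traceback (most recent call last):
  ...
ValueError: invalid literal for float(): abc
>>> Decimal("Fred")     # the current behavior: a quiet NaN
Decimal("NaN")
]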

On Fri, Jul 02, 2004, Raymond Hettinger wrote:
There's another option: allow both options one and two, with option two the default; the test cases can manually set option one, while adding a few extra ones to cover the default usage.
When this was discussed earlier (may have been in private), it was decided to leave this because it's so easy to change.

--
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Typing is cheap.  Thinking is expensive."  --Roy Smith, c.l.py

[Raymond Hettinger]
Don't be confused by his odd wording here. There's a concept of "raising" in Python, but not in his spec! Exceptional conditions in the spec are "signaled", not raised. Whether they go on to raise (in the Python sense) a user-visible exception is to be controlled by the specific signaled exceptional condition's trap-enable flag.
Sorry, but I'm really confused now. The current version of the spec defines an exceptional condition specifically for this purpose:

"""
Conversion syntax

    This occurs and signals invalid-operation if a string is being
    converted to a number and it does not conform to the numeric
    string syntax.  The result is [0,qNaN].
"""

In other words, the current spec *requires* that trying to convert a nonsense string signal invalid-operation, not merely allows it. AFAICT, "our" code already does that, too(!).

Note that I've suggested that some trap-enable flags should be set by default for Python: those for invalid operation, divide by 0, and overflow. So, in my mind (unsullied by reality <wink>):

(a) conversion of a nonsense string already signals invalid-operation; and,

(b) because the invalid-operation trap-enable flag is set by default in Python, it also already raises an exception.
That last choice is impossible: you can *signal* invalid operation and return a quiet NaN, but if you *raise* something then it's impossible to return anything. We could make InvalidOperation a subclass of ValueError, BTW. That would make sense (InvalidOperation always has to do with an insane input value).
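[A sketch of the signal-versus-raise distinction, using the context's trap-enable flags; this assumes the traps/flags mappings of the decimal API, and flag values may display as ints or booleans depending on the version:

>>> from decimal import Decimal, getcontext, InvalidOperation
>>> ctx = getcontext()
>>> ctx.traps[InvalidOperation] = False   # trap disabled: signal only
>>> Decimal("Fred")                       # a quiet NaN comes back...
Decimal("NaN")
>>> ctx.flags[InvalidOperation]           # ...but the condition was signaled
True
>>> ctx.traps[InvalidOperation] = True    # trap enabled: signaling now raises
>>> Decimal("Fred")
Traceback (most recent call last):
  ...
InvalidOperation
]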
Either of the last two involves editing the third-party test cases, which I am loath to do.
Or, until Mike fixes the test vectors, special-case the snot out of these specific tests in the test driver. That's assuming the test vectors don't actually match the spec now. But I suspect that they do, and that there's a different confusion at work here.
That's one reason I want Python to enable the invalid-operation trap by default. That choice isn't suitable for all apps all the time, but I have no hesitation Pronouncing that it will be suitable for most Python apps most of the time.
That's because Python has blissfully ignored IEEE-754 for its current floating-point operations. 754 requires all of those for binary fp, and a small part of my hope for Decimal is that it nudges Python into coming to grips with that 20-year-old standard. Continuing to ignore it creates problems for scientific programmers too, both those who hate 754 and those who love it. Python's binary fp support will end up looking more like Decimal over time; the fact that Python's binary fp is 20 years out of date isn't a good reason to cripple Decimal too.
I expect that's a red herring. Financial apps are usually dominated by the speed of I/O conversions and of addition, and those all take time proportional to the number of significant decimal digits in the operands, not proportional to precision. Division certainly takes time proportional to precision, but that's a rarer operation. Even after division or multiplication, financial apps will usually round the result back, again reducing the number of significant decimal digits (e.g., your tax forms generally make you round on every line, not accumulate everything to 100 digits and round once at the end).
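[For instance, the round-on-every-line pattern is a one-call affair with quantize; a sketch, where ROUND_HALF_UP is just one plausible choice of money rounding:

>>> from decimal import Decimal, ROUND_HALF_UP
>>> line_total = Decimal("7.105") * 3
>>> line_total
Decimal("21.315")
>>> line_total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
Decimal("21.32")
]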
Tim had suggested that 20 or so would handle many user requirements without needing a context change.
It would be pathetic to ship with a default precision smaller than mid-range hand calculators.
I don't care about that either.
It should be at least 12 (copying HP hand calculators). I believe current COBOL requires 36 decimal digits in its decimal type. VB's Currency type has 19 decimal digits, and its Decimal type has 28 decimal digits. Those are all defaults decided by people who care about applications more than word boundaries <wink>.
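[As a data point, running at a VB-sized 28 digits would look like this; a sketch using a local Context object rather than changing the global context:

>>> from decimal import Decimal, Context
>>> ctx = Context(prec=28)
>>> ctx.divide(Decimal(1), Decimal(7))
Decimal("0.1428571428571428571428571429")
]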

Raymond Hettinger wrote:
For the alpha, I've left all of the traps off except for ConversionSyntax.
I agree with Michael Chermside's list of traps that should be on by default. As a potential user of Decimal, I'd much rather start off with training wheels *on*. I can always take them off myself later. Going back to Tim Peters' post from a few days ago, Tim Peters wrote:
Tim also wrote, "Errors should never pass silently, unless explicitly silenced." I put high value on Tim's opinions in matters mathematical, and so does Guido: he delegated to Tim on PEP 327.

That brings up a point: although marked "Final" in CVS, I never saw an official "Accepted" pronouncement from Tim or Guido. So, just for the record, Tim, do you agree that PEP 327 should be marked Accepted & Final?

Michael Chermside wrote:
Tim, do you agree with this list? Any changes?

I think the decision of which signals to trap (convert to Python raised exceptions) should be added to the PEP.

(Hmm. There seems to be a mismatch between the PEP and the implementation. The "signals" listed above are listed as "conditions" in the PEP, and according to the PEP, the "invalid-operation" signal covers several conditions: conversion syntax, division impossible & undefined, invalid context, and invalid operation. See <http://www.python.org/peps/pep-0327.html#exceptional-conditions>.)

--
David Goodger <http://starship.python.net/~goodger>
Python Enhancement Proposal (PEP) Editor <http://www.python.org/peps/>
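[For reference, the many-to-one mapping David points out is visible in the module's exception hierarchy; a sketch, assuming the conditions are implemented as subclasses of the signals they roll up into, as in decimal.py:

>>> import decimal
>>> issubclass(decimal.ConversionSyntax, decimal.InvalidOperation)
True
>>> issubclass(decimal.DivisionUndefined, decimal.InvalidOperation)
True
>>> issubclass(decimal.InvalidContext, decimal.InvalidOperation)
True
]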

I'm still of the opinion that the invalid-operation, overflow, and division-by-zero traps should be enabled by default, and the others disabled. (The long list of conditions that's been given maps many-to-one onto the signals in the spec, and traps are associated with signals, not with conditions; there are only 8 signals in the spec.)

At KSR we built a (nearly) 754-conforming FPU, and our initial Fortran and C compilers played along with the standard by disabling all traps by default. That lasted about a month. Customers were rabidly unhappy about it, and I sympathized. Code that's *expecting* NaNs and infinities can *sometimes* live gracefully with them, and more gracefully with them than without them. But like context itself, the "non-stop" mode of running is aimed more at numeric experts than at end users. For example, library code can't allow an exception to propagate to the caller when trying an expected-case optimization that may overflow, and so setting "non-stop" mode is typical in library routines.

The second releases of our compilers enabled the same set of signals by default that I'm recommending Python enable by default. Nobody complained about that, and very few users changed the defaults. When you find a setting that very few users change, that's the natural setting to make the default.
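[A sketch of what that recommended default would look like from the user's side, assuming a Context constructor that accepts a list of signals to trap, as in the decimal API:

>>> from decimal import Context, InvalidOperation, DivisionByZero, Overflow
>>> ctx = Context(traps=[InvalidOperation, DivisionByZero, Overflow])
>>> sorted(sig.__name__ for sig, on in ctx.traps.items() if on)
['DivisionByZero', 'InvalidOperation', 'Overflow']
]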
[David Goodger]
So, just for the record, Tim, do you agree that PEP 327 should be marked Accepted & Final?

That was all done in pvt email. Guido bounced it to me, and I said "sure!". The PEP is in as good a shape as the new-style class PEPs when they were released, and Decimal will have more immediate benefits for more users. The PEP can still be edited after it's "final" <wink>.