[Python-Dev] Re: Decimal data type issues
tim.one at comcast.net
Tue Apr 20 22:30:21 EDT 2004
> Hopefully this is somewhat clearer.
Sorry, it really isn't to me. When you extract "a number" from one of your
databases, what do you get from it, concretely? A triple of (decimal string
with embedded decimal point, integer precision, integer scale)? An integer
value with an integer scale? A decimal string w/o embedded decimal point
and an integer scale? Etc. I gave up, then, after the first two examples:
> Thus, I would like to create decimal instances that conform to those
> schema -- i.e., they would be rounded appropriately and overflow errors
> generated if they exceeded either the maximum precision or scale. e.g.:
> Decimal('20000.001', precision=4, scale=0) === Decimal('20000')
> Decimal('20000.001', precision=4, scale=0) raises an overflow exception
The inputs on those two lines look identical to me, so I'm left more lost
than before -- you can't really want Decimal('20000.001', precision=4,
scale=0) *both* to return 20000 *and* raise an overflow exception.
In any case, that's not what the IBM standard supports. Context must be
respected in its abstract from-string operation, and maximum precision is a
component of context. If context's precision is 4, then

    Decimal('20000.001')

would round to the most-significant 4 digits (according to the rounding mode
specified in context), and signal both the "inexact" and "rounded"
conditions. What "signal" means: if the trap-enable flags are set in
context for either or both of those conditions, an exception will be raised;
if the trap-enable flags for both of those conditions are clear, then the
inexact-happened and rounded-happened status flags in context are set, and
you can inspect them or not (as you please).
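For reference, the decimal module that Python eventually shipped exposes this machinery directly: Context.create_decimal plays the role of the standard's from-string operation (the plain Decimal() constructor is exact and ignores context precision), and each condition has both a trap-enable entry and a status flag on the context. A minimal sketch of the behavior described above, using a context with precision 4:

```python
from decimal import Context, Inexact, Rounded

# Context with max precision 4; Inexact and Rounded are untrapped by default,
# so when they occur only the context's status flags are set.
ctx = Context(prec=4)

# The context-respecting from-string operation: rounds to 4 significant digits.
d = ctx.create_decimal('20000.001')
print(d)  # 2.000E+4

# Both conditions were signaled; with traps clear, the flags just record that.
inexact_flag = ctx.flags[Inexact]
rounded_flag = ctx.flags[Rounded]
print(inexact_flag, rounded_flag)  # True True

# Enabling a trap turns the same condition into an exception instead.
ctx.clear_flags()
ctx.traps[Inexact] = True
trapped = False
try:
    ctx.create_decimal('20000.001')
except Inexact:
    trapped = True
print(trapped)  # True
```

Whether you inspect the flags or let the exception propagate is entirely the caller's choice, which is exactly the flexibility the standard's signal mechanism is designed to give.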
That's what the standard provides. More than that would be extensions to
the standard. The standard is precise about semantics, and it's plenty to
implement (just) all of that at the start.