I don't understand what's unclear -
Obviously ;-) But without carefully constructed concrete use cases, what you say continues to remain unclear to me.
I was suggesting there should be an easy way to have the exact result for all operations on Decimal operands that have exact results representable as Decimal.
See above. It's very much a feature of the IEEE 754/854 standards (of which family Python's decimal model is a part) that if an exact result is exactly representable, then that's the result you get.
But that's apparently not what you're asking for.
... Incidentally, I also noticed the procedure suggested by the documentation for doing fixed-point arithmetic can result in incorrect double rounding in some situations:
(D('1.01')*D('1.46')).quantize(TWOPLACES) # true result is 1.4746
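For instance (a hypothetical reproduction, assuming the context precision has been lowered to 4, so the product is rounded once before quantize rounds it a second time):

```python
import decimal
from decimal import Decimal as D

TWOPLACES = D('0.01')
decimal.getcontext().prec = 4          # assumed reduced precision
product = D('1.01') * D('1.46')        # exact result 1.4746 rounds to 1.475
print(product.quantize(TWOPLACES))     # second rounding: 1.48, not 1.47
```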
Are you using defaults, or did you change something? Here under 3.9.0, showing everything:
>>> from decimal import Decimal as D
>>> TWOPLACES = D('0.01')
>>> (D('1.01')*D('1.46')).quantize(TWOPLACES)
Decimal('1.47')
Different result than you showed.
If by "Decimal" you mean Python's "decimal.Decimal" class, the constructor ignores the context precision and retains all the information passed to it, so there's no need at all to change the Decimal precision.
This only applies to the constructor though, not the arithmetic operators.
Which is why I said "the constructor" ;-)
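A short sketch of that distinction (nothing here beyond the thread's claims): the constructor keeps every digit it is given, while arithmetic rounds to the context precision:

```python
import decimal
from decimal import Decimal as D

decimal.getcontext().prec = 2
d = D('1.2345')    # constructor ignores context precision
print(d)           # 1.2345 -- all digits retained
print(d + 0)       # 1.2 -- arithmetic rounds to 2 digits
```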
Here's a guess at what you want:
That works for the specific use case of converting Fraction to Decimal.
That's the only use case your original post mentioned.
I didn't know that Decimal supported E notation in the constructor, so I thought I would have to multiply or divide by a power of ten directly [which would therefore have to be rounded]... going through the string constructor seems extremely inefficient and inelegant, though.
Not at all! Conversion between decimal strings and decimal.Decimal objects is close to trivial. It's straightforward and linear-time (in the number of significant digits). It's converting between decimal strings and binary-based representations (float or int) that's hairy and potentially slow.
And there is no rounding needed for any power-of-10 input, regardless of context precision, provided only you don't over/under-flow the exponent range.
I'd like a way to losslessly multiply or divide a decimal by a power of ten at least... a sort of decimal equivalent to ldexp.
Again, without concrete examples, that's clear as mud to me. The power of 10 (ignoring over/under-flow) can always be gotten exactly. If you go on to multiply/divide by that, _then_ rounding can occur, but only if the other operand has more significant digits than the current context precision allows for.
But if you can't live with that, you don't want the `decimal` module _at all_. The entire model, from the ground up, is user-settable but _fixed_ working precision.
Nevertheless, you can get that specific effect via passing a tuple to the constructor:
>>> d = D('1.2345')
>>> d
Decimal('1.2345')
>>> import decimal
>>> decimal.getcontext().prec = 2
>>> d
Decimal('1.2345')
>>> d * 1000000  # multiplying rounds to context precision
Decimal('1.2E+6')
>>> d.as_tuple()
DecimalTuple(sign=0, digits=(1, 2, 3, 4, 5), exponent=-4)
>>> D((_.sign, _.digits, _.exponent + 6))  # constructor does not round
Decimal('1.2345E+6')
So that's how to implement a ldexp-alike (although I doubt there's much real use for such a thing): convert the input to a tuple, add the exponent delta to the tuple's exponent field, and give it back to the constructor again.
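Wrapping that recipe in a function (a minimal sketch; the name `decimal_ldexp` is my own, and it assumes a finite input, since as_tuple() on infinities and NaNs returns a non-integer exponent):

```python
from decimal import Decimal

def decimal_ldexp(d, k):
    """Return d * 10**k exactly, bypassing context rounding.

    Assumes d is a finite Decimal.
    """
    sign, digits, exponent = d.as_tuple()
    # The constructor never rounds, so the result is exact.
    return Decimal((sign, digits, exponent + k))

# Example: shift 1.2345 up six decimal places, losslessly.
print(decimal_ldexp(Decimal('1.2345'), 6))   # 1.2345E+6
print(decimal_ldexp(Decimal('1.2345'), -2))  # 0.012345
```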
Why I say I doubt there's much real use: if you create, by _any_ means, a Decimal with more significant digits than current working precision, the "extra" digits will just get rounded away by any operation at all. For example, following the last line of the example above, even by unary plus:
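A plausible reconstruction of the elided example (it assumes the session above, with context precision still 2 and a 5-digit value on hand):

```python
import decimal
from decimal import Decimal as D

decimal.getcontext().prec = 2
d = D((0, (1, 2, 3, 4, 5), 2))  # 1.2345E+6 -- more digits than prec allows
print(+d)  # unary plus applies context rounding: 1.2E+6
```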
Again, the concept of a _fixed_ (albeit user-settable) working precision is deep in the module's bones.