Exact decimal multiplication and division operations
I was making a "convert Fraction to Decimal, exactly if possible" function and ran into a wall: it's not possible to do some of the necessary operations with exact precision in decimal:

- multiplication
- division where the result can be represented exactly [the divisor is an integer whose prime factors are only two and five, or a rational number whose numerator qualifies]

I assume there's some way around it that I haven't spent enough time to figure out [create a temporary context with sufficient digits for multiplication, and work out the reciprocal power of 10 by hand to use this multiplication to implement division], but I feel like these exact operations should be supported in the standard library.
On Thu, Oct 08, 2020 at 11:58:04AM -0400, Random832 wrote:
I was making a "convert Fraction to Decimal, exactly if possible" function and ran into a wall: it's not possible to do some of the necessary operations with exact precision in decimal:
Of course you can't do them with *exact* precision in Decimal, because that only has a fixed precision. (User-configurable, but fixed.)

py> from decimal import Decimal
py> from fractions import Fraction
py> x = Fraction(1, 3)
py> Decimal(x.numerator)/x.denominator
Decimal('0.3333333333333333333333333333')

Is that not sufficient? That is as close to exact as possible with the default Decimal context. If you want more precision, you can adjust the context.

py> from decimal import localcontext
py> with localcontext() as ctx:
...     ctx.prec = 50
...     Decimal(1)/3
...
Decimal('0.33333333333333333333333333333333333333333333333333')

I presume you understand that most fractions can not be represented as *exact* Decimals, no matter how many digits of precision you set. So I don't quite understand what you are trying to do, or why Decimal doesn't already do what you are trying to do. It might help if you can show an example of what you would like to do, but currently can't.

Related: why doesn't Decimal support direct conversion from Fraction?

-- Steve
On Fri, Oct 9, 2020 at 3:43 AM Steven D'Aprano
On Thu, Oct 08, 2020 at 11:58:04AM -0400, Random832 wrote:
I was making a "convert Fraction to Decimal, exactly if possible" function and ran into a wall: it's not possible to do some of the necessary operations with exact precision in decimal:
Of course you can't do them with *exact* precision in Decimal, because that only has a fixed precision. (User-configurable, but fixed.)
py> from decimal import Decimal
py> from fractions import Fraction
py> x = Fraction(1, 3)
py> Decimal(x.numerator)/x.denominator
Decimal('0.3333333333333333333333333333')
You trimmed off the part where the OP said that the denominator would be only multiples of 2 and/or 5 :)
>>> Fraction(1, 2) ** 53 * Fraction(1, 5) ** 47
Fraction(1, 6400000000000000000000000000000000000000000000000)
>>> Decimal("314159265358979323") / _.denominator
Decimal('4.908738521234051921875E-32')
In theory, this should be precisely representable in decimal, but how many digits would you need?

ChrisA
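Chris's question has a direct answer: the digit count can be computed without doing any division, by scaling the numerator until the denominator becomes a power of ten and counting the significant digits left over. A minimal sketch (`digits_needed` is a name invented here, not anything in the decimal module):

```python
from fractions import Fraction

def digits_needed(f):
    """Significant digits required to write f exactly in decimal;
    raises ValueError if f has no finite decimal expansion."""
    n, d = f.numerator, f.denominator
    e2 = e5 = 0
    while d % 2 == 0:
        e2 += 1
        d //= 2
    while d % 5 == 0:
        e5 += 1
        d //= 5
    if d != 1:
        raise ValueError("no finite decimal expansion", f)
    # Pad with 2s or 5s so f == n * 10**-max(e2, e5) exactly.
    n *= (2 if e2 <= e5 else 5) ** abs(e5 - e2)
    s = str(abs(n)).rstrip("0")  # trailing zeros don't cost precision
    return len(s) if s else 1

# Chris's example needs 22 significant digits:
print(digits_needed(Fraction(314159265358979323, 2**53 * 5**47)))  # → 22
```

The 22 matches the 22 significant digits in Decimal('4.908738521234051921875E-32').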
On Fri, Oct 09, 2020 at 03:53:24AM +1100, Chris Angelico wrote:
On Fri, Oct 9, 2020 at 3:43 AM Steven D'Aprano
wrote:

On Thu, Oct 08, 2020 at 11:58:04AM -0400, Random832 wrote:
I was making a "convert Fraction to Decimal, exactly if possible" function and ran into a wall: it's not possible to do some of the necessary operations with exact precision in decimal:
Of course you can't do them with *exact* precision in Decimal, because that only has a fixed precision. (User-configurable, but fixed.)
py> from decimal import Decimal
py> from fractions import Fraction
py> x = Fraction(1, 3)
py> Decimal(x.numerator)/x.denominator
Decimal('0.3333333333333333333333333333')
You trimmed off the part where the OP said that the denominator would be only multiples of 2 and/or 5 :)
I trimmed it because I didn't understand the relevance. If the denominator is an exact multiple of only 2 and 5, then I think the conversion will be exact if the precision is sufficient.
>>> Fraction(1, 2) ** 53 * Fraction(1, 5) ** 47
Fraction(1, 6400000000000000000000000000000000000000000000000)
>>> Decimal("314159265358979323") / _.denominator
Decimal('4.908738521234051921875E-32')
In theory, this should be precisely representable in decimal, but how many digits would you need?
The brute force way of doing that will be to trap on Inexact and do the division. If it succeeds, you are done; otherwise double the precision and try again. Repeat until you either succeed, or reach the maximum possible precision.

I don't know of any clever way of predicting the number of decimal places a fraction will take as a decimal short of actually doing the division. It does not depend on only the denominator, you also need to know the numerator:

Decimal(1/4)  # requires 2 decimal places
Decimal(2/4)  # requires only 1 decimal place

-- Steve
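Steven's trap-and-retry loop can be sketched directly. This is a sketch, not anything from the stdlib: `frac_to_dec` and the precision cap of 1000 digits are my own choices.

```python
from decimal import Decimal, localcontext, Inexact
from fractions import Fraction

def frac_to_dec(f, max_prec=1000):
    """Trap Inexact and retry the division with doubled precision
    until it succeeds or the (arbitrary) cap is reached."""
    prec = 28
    while prec <= max_prec:
        try:
            with localcontext() as ctx:
                ctx.prec = prec
                ctx.traps[Inexact] = True  # raise instead of rounding silently
                return Decimal(f.numerator) / f.denominator
        except Inexact:
            prec *= 2
    raise ValueError("no exact representation within precision cap", f)

print(frac_to_dec(Fraction(3, 8)))  # exact at the first precision tried
```

Fractions with a non-terminating expansion, like 1/3, fail at every precision and hit the cap.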
On Thu, Oct 8, 2020, at 13:12, Steven D'Aprano wrote:
I trimmed it because I didn't understand the relevance. If the denominator is an exact multiple of only 2 and 5, then I think the conversion will be exact if the precision is sufficient.
I wanted the conversion to be exact irrespective of the precision, not least because I'm not sure how to determine what the needed precision would be. After I looked into the issue more, it looks like changing addition and subtraction [which I wrongly believed already supported this] would be too invasive for this concern to justify adding full "exact if possible" support to decimal, but I do think it might be worthwhile to add a lossless decimal equivalent to frexp/ldexp.

On Thu, Oct 8, 2020, at 12:40, Steven D'Aprano wrote:
Related: why doesn't Decimal support direct conversion from Fraction?
That's what I wanted to know -- I noticed that it didn't when I was trying to figure out what Guido meant when he alluded to some issue with Decimal in the SupportsString thread, and figured it should be easy enough to add support, then ran into a wall [because I wasn't aware the constructor supported E notation].
[Random832]
I was making a "convert Fraction to Decimal, exactly if possible" function and ran into a wall: it's not possible to do some of the necessary operations with exact precision in decimal:
- multiplication
- division where the result can be represented exactly [the divisor is an integer whose prime factors are only two and five, or a rational number whose numerator qualifies]
Without you supplying careful concrete examples, I really have no idea what you're looking for.
I assume there's some way around it that I haven't spent enough time to figure out [create a temporary context with sufficient digits for multiplication, and work out the reciprocal power of 10 by hand to use this multiplication to implement division],
Integers have unbounded precision, so stick to those when part of a computation needs to be unbounded.

If by "Decimal" you mean Python's "decimal.Decimal" class, the constructor ignores the context precision, and retains all the info passed to it. So there's no need at all to change Decimal precision.

Here's a guess at what you want:

def frac2dec(f):
    from decimal import Decimal as D
    n, d = f.numerator, f.denominator
    if not n:
        return D(0)
    if d == 1:
        return D(n)
    e2 = e5 = 0
    while d % 2 == 0:
        e2 += 1
        d //= 2
    while d % 5 == 0:
        e5 += 1
        d //= 5
    if d != 1:
        raise ValueError("exact conversion not possible", f)
    n *= (2 if e2 <= e5 else 5) ** abs(e5 - e2)
    return D(f"{n}E-{max(e2, e5)}")

Then, e.g.,

>>> frac2dec(Fraction(12311111111111111111111111111111111, 1000))
Decimal('12311111111111111111111111111111.111')
>>> frac2dec(Fraction(12311111111111111111111111111111111, 500))
Decimal('24622222222222222222222222222222.222')
>>> frac2dec(Fraction(12311111111111111111111111111111111, 200))
Decimal('61555555555555555555555555555555.555')

Or Chris Angelico's example, but it's not really instructive because Decimal's default precision is enough for exact calculation in this specific case:

>>> frac2dec(Fraction(314159265358979323, 2**53 * 5**47))
Decimal('4.908738521234051921875E-32')

However, you get the same result if you set Decimal's precision to, e.g., 1 first:

>>> import decimal
>>> decimal.getcontext().prec = 1
>>> frac2dec(Fraction(314159265358979323, 2**53 * 5**47))
Decimal('4.908738521234051921875E-32')
but I feel like these exact operations should be supported in the standard library.
Since I really don't know what you want, can't say -- but I _doubt_ any plausible way of fleshing it out is useful enough to warrant inclusion in the core.
On Thu, Oct 8, 2020, at 13:17, Tim Peters wrote:
[Random832]
I was making a "convert Fraction to Decimal, exactly if possible" function and ran into a wall: it's not possible to do some of the necessary operations with exact precision in decimal:
- multiplication
- division where the result can be represented exactly [the divisor is an integer whose prime factors are only two and five, or a rational number whose numerator qualifies]
Without you supplying careful concrete examples, I really have no idea what you're looking for.
I don't understand what's unclear -- I was suggesting there should be an easy way to have the exact result for all operations on Decimal operands that have exact results representable as Decimal.

I did misunderstand part of the documentation, though... the documentation claims "Some operations like addition, subtraction, and multiplication by an integer will automatically preserve fixed point.", and I assumed without checking that this applied even if the size of the operands was greater than the context precision, and therefore that addition and subtraction were always exact. This is why my post was so focused on multiplication and division. Since correcting that misunderstanding, I've looked over the implementation and I no longer think this is worth adding without a concrete use case, since supporting adding very large and very small numbers together would introduce more inefficiencies than it's worth.

Incidentally, I also noticed the procedure suggested by the documentation for doing fixed point arithmetic can result in incorrect double rounding in some situations:
>>> (D('1.01')*D('1.46')).quantize(TWOPLACES)  # true result is 1.4746
Decimal('1.48')
Integers have unbounded precision, so stick to those when part of a computation needs to be unbounded. If by "Decimal" you mean Python's "decimal.Decimal" class, the constructor ignores the context precision, and retains all the info passed to it. So there's no need at all to change Decimal precision.
This only applies to the constructor though, not the arithmetic operators.
Here's a guess at what you want:
That works for the specific use case of converting Fraction to Decimal. I didn't know that Decimal supported E notation in the constructor, so I thought I would have to multiply or divide by a power of ten directly [which would therefore have to be rounded]... going through the string constructor seems extremely inefficient and inelegant, though. I'd like a way to losslessly multiply or divide a decimal by a power of ten at least... a sort of decimal equivalent to ldexp. And I guess for the other operations it's possible to work around by doing the operations in Fraction and converting to Decimal in the end -- that also seems inelegant, but oh well.
[Random832]
I don't understand what's unclear --
Obviously ;) But without carefully constructed concrete use cases, what you say continues to remain unclear to me.
I was suggesting there should be an easy way to have the exact result for all operations on Decimal operands that have exact results representable as Decimal.
See above. It's very much a feature of the IEEE 754/854 standards (of which family Python's decimal model is a part) that if an exact result is exactly representable, then that's the result you get. But that's apparently not what you're asking for.
... Incidentally, I also noticed the procedure suggested by the documentation for doing fixed point arithmetic can result in incorrect double rounding in some situations:
>>> (D('1.01')*D('1.46')).quantize(TWOPLACES)  # true result is 1.4746
Decimal('1.48')
Are you using defaults, or did you change something? Here under 3.9.0, showing everything:

>>> from decimal import Decimal as D
>>> TWOPLACES = D('0.01')
>>> (D('1.01')*D('1.46')).quantize(TWOPLACES)
Decimal('1.47')

Different result than you showed.
If by "Decimal" you mean Python's "decimal.Decimal" class, the constructor ignores the context precision, and retains all the info passed to it, So there's no need at all to change Decimal precision.
This only applies to the constructor though, not the arithmetic operators.
Which is why I said "the constructor" ;)
Here's a guess at what you want:
That works for the specific use case of converting Fraction to Decimal
That's the only use case your original post mentioned.
I didn't know that Decimal supported E notation in the constructor, so I thought I would have to multiply or divide by a power of ten directly [which would therefore have to be rounded]... going through the string constructor seems extremely inefficient and inelegant, though,
Not at all! Conversion between decimal strings and decimal.Decimal objects is close to trivial. It's straightforward and linear-time (in the number of significant digits). It's converting between decimal strings and binary-based representations (float or int) that's hairy and potentially slow. And there is no rounding needed for any power-of-10 input, regardless of context precision, provided only you don't over/underflow the exponent range.
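A quick way to see the point above, that the constructor keeps every digit no matter how small the working precision, while any arithmetic operation rounds:

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 1  # deliberately tiny working precision
    d = Decimal("12345E-7")  # constructor ignores precision: all digits kept
    print(d)   # → 0.0012345
    print(+d)  # even unary plus rounds to context precision → 0.001
```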
I'd like a way to losslessly multiply or divide a decimal by a power of ten at least... a sort of decimal equivalent to ldexp.
Again, without concrete examples, that's clear as mud to me. The power of 10 (ignoring over/underflow) can always be gotten exactly. If you go on to multiply/divide by that, _then_ rounding can occur, but only if the other operand has more significant digits than the current context precision allows for. But if you can't live with that, you don't want the `decimal` module _at all_. The entire model, from the ground up, is user-settable but _fixed_ working precision. Nevertheless, you can get that specific effect via passing a tuple to the constructor:
>>> d = D('1.2345')
>>> d
Decimal('1.2345')
>>> import decimal
>>> decimal.getcontext().prec = 2
>>> d
Decimal('1.2345')
>>> d * 1000000  # multiplying rounds to context precision
Decimal('1.2E+6')
>>> d.as_tuple()
DecimalTuple(sign=0, digits=(1, 2, 3, 4, 5), exponent=-4)
>>> D((_.sign, _.digits, _.exponent + 6))  # constructor does not round
Decimal('1.2345E+6')
So that's how to implement an ldexp-alike (although I doubt there's much real use for such a thing): convert the input to a tuple, add the exponent delta to the tuple's exponent field, and give it back to the constructor again. Why I say I doubt there's much real use: if you create, by _any_ means, a Decimal with more significant digits than current working precision, the "extra" digits will just get rounded away by any operation at all. For example, following the last line of the example above, even by unary plus:
>>> +_
Decimal('1.2E+6')
Again, the concept of a _fixed_ (albeit user-settable) working precision is deep in the module's bones.
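The ldexp-alike Tim describes takes only a few lines. The name `dec_ldexp` is invented here; finite values only, since `as_tuple()` on NaN/Infinity returns a non-integer exponent field:

```python
from decimal import Decimal

def dec_ldexp(d, delta):
    """Return d * 10**delta exactly, bypassing context rounding,
    by editing the exponent in the tuple representation."""
    sign, digits, exp = d.as_tuple()
    return Decimal((sign, digits, exp + delta))

print(dec_ldexp(Decimal("1.2345"), 6))  # → 1.2345E+6, no digits lost
```

A negative delta divides by a power of ten just as losslessly.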
On Thu, Oct 8, 2020, at 16:07, Tim Peters wrote:
See above. It's very much a feature of the IEEE 754/854 standards (of which family Python's decimal model is a part) that if an exact result is exactly representable, then that's the result you get.
My suggestion was for a way to make it so that if an exact result is exactly representable at any precision you get that result, with rounding only applied for results that cannot be represented exactly regardless of precision.

It seems like this is incompatible with the design of the decimal module, but it's ***absolutely*** untrue that "if an exact result is exactly representable, then that's the result you get", because *the precision is not part of the representation format*.

What threw me off is the fact that there is a single type that represents an unlimited number of digits, rather than separate types for each precision. I don't think that feature is shared with IEEE, and it creates a philosophical question of interpretation [the answer of which we clearly disagree on] of what it means for a result to be "exactly representable", which doesn't exist with the IEEE fixed-length formats.

At this point, doing the math in Fractions and converting back and forth to Decimal as necessary is probably good enough, though.
But that's apparently not what you're asking for.
... Incidentally, I also noticed the procedure suggested by the documentation for doing fixed point arithmetic can result in incorrect double rounding in some situations:
>>> (D('1.01')*D('1.46')).quantize(TWOPLACES)  # true result is 1.4746
Decimal('1.48')
Are you using defaults, or did you change something? Here under 3.9.0, showing everything:
This was with the precision set to 4, I forgot to include that. With default precision the same principle applies but needs much longer operands to demonstrate. The issue arises when, in the true result, the last digit within the context precision is 4, the one after it is 6, 7, 8, or 9, and the one before it is odd. The 4 is rounded up to 5, and that 5 is used to round up the previous digit. Here's an example with more digits -- it's easy enough to generalize.
>>> ctx.prec = 28
>>> (D('1.00000000000001')*D('1.49999999999996')).quantize(D('.00000000000001'))
Decimal('1.49999999999998')
>>> ctx.prec = 29
>>> (D('1.00000000000001')*D('1.49999999999996')).quantize(D('.00000000000001'))
Decimal('1.49999999999997')
The true result of the multiplication here is 1.4999999999999749999999999996, which requires 29 digits of precision [and, no, it's not just values that look like 999999 and 000001, but a brute-force search takes much longer for 15-digit operands than 3-digit ones]
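The brute-force search alluded to above is easy to write down. A sketch over two-place operands x.yz at precision 4 (the function name and ranges are my own):

```python
from decimal import Decimal, localcontext

def find_double_roundings(prec=4, lo=100, hi=1000):
    """Pairs of two-place operands whose product, rounded to `prec`
    digits first, quantizes to two places differently than the exact
    product does (i.e., pairs exhibiting double rounding)."""
    two = Decimal("0.01")
    hits = []
    for a in range(lo, hi):
        for b in range(a, hi):
            da = Decimal(a).scaleb(-2)  # a / 100, exactly
            db = Decimal(b).scaleb(-2)
            with localcontext() as ctx:
                ctx.prec = 28  # wide enough that the product is exact
                exact = (da * db).quantize(two)
            with localcontext() as ctx:
                ctx.prec = prec  # the product rounds before quantize
                narrow = (da * db).quantize(two)
            if exact != narrow:
                hits.append((da, db))
    return hits
```

The pair (1.01, 1.46) from the example above shows up among the hits, along with many pairs that don't look like 999.../000... patterns.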
I didn't know that Decimal supported E notation in the constructor, so I thought I would have to multiply or divide by a power of ten directly [which would therefore have to be rounded]... going through the string constructor seems extremely inefficient and inelegant, though,
Not at all! Conversion between decimal strings and decimal.Decimal objects is close to trivial. It's straightforward and linear-time (in the number of significant digits).
I think for some reason I'd assumed the mantissa was represented as a binary number, since the .NET decimal format [which isn't arbitrary-precision] does that. I should probably have looked over the implementation more before jumping in.
I'd like a way to losslessly multiply or divide a decimal by a power of ten at least... a sort of decimal equivalent to ldexp.
Again, without concrete examples, that's clear as mud to me.
Er, in this case the conversion of fraction to decimal *is* the concrete example, it's a one-for-one substitution for the use of the string constructor: ldexp(n, -max(e2, e5)) in place of D(f"{n}E-{max(e2, e5)}").
[Random832]
My suggestion was for a way to make it so that if an exact result is exactly representable at any precision you get that result, with rounding only applied for results that cannot be represented exactly regardless of precision.
That may have been the suggestion in your head ;) , but -- trust me on this -- it's taken a long time to guess that from what you wrote. Again, you could have saved us all a world of troubles by giving concrete examples. What you wrote just now adds another twist: apparently you DO want rounding in some cases ("results that cannot be represented exactly regardless of precision"). The frac2dec function I suggested explicitly raised a ValueError in that case instead, and you said at the time that function would do what you wanted.
It seems like this is incompatible with the design of the decimal module, but it's ***absolutely*** untrue that "if an exact result is exactly representable, then that's the result you get", because *the precision is not part of the representation format*.
Precision certainly is part of the model in the standards the decimal module implements. The decimal module slightly extends the standards by precisely defining what happens for basic operations when mixing operands of different precisions: the result is as if computed to infinite precision, then rounded once at the end according to the context rounding mode, to the context precision.
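The mixed-precision rule Tim states is easy to observe: feed in operands carrying more digits than the context allows and watch the single final rounding.

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 5
    big = Decimal("123456789")      # 9 digits survive the constructor
    small = Decimal("0.000011111")
    # Exact sum is 123456789.000011111; it is rounded once, to 5 digits.
    print(big + small)  # → 1.2346E+8
```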
What threw me off is the fact that there is a single type that represents an unlimited number of digits, rather than separate types for each precision. I don't think that feature is shared with IEEE,
Right, the standards don't define mixed-precision arithmetic, but `decimal` does.
and it creates a philosophical question of interpretation [the answer of which we clearly disagree on] of what it means for a result to be "exactly representable", which doesn't exist with the IEEE fixed-length formats.
No, it does: for virtually every operation, apart from the constructor, "exactly representable" refers to the context's precision setting. If you don't believe that, see how and when the "inexact" flag gets set. It's the very meaning of the "inexact" flag that "the infinitely precise result was not exactly representable (i.e., rounding lost some information)". It's impossible to divorce the meaning of "inexact" from the context precision.
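Watching the flag directly makes that definition concrete:

```python
from decimal import Decimal, localcontext, Inexact

with localcontext() as ctx:
    ctx.prec = 3
    ctx.clear_flags()
    Decimal(1) / Decimal(8)      # 0.125 fits exactly in 3 digits
    print(bool(ctx.flags[Inexact]))  # nothing was rounded away
    Decimal(1) / Decimal(3)      # 0.333... cannot fit at any precision
    print(bool(ctx.flags[Inexact]))  # information was lost
```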
At this point, doing the math in Fractions and converting back and forth to Decimal as necessary is probably good enough, though.
I still don't get it. The frac2dec function I wrote appeared to me to be 99.9% useless, since, as already explained, there's nothing you can _do_ with the result that doesn't risk rounding away its digits. For that reason, this feels like an extended "XY problem" to me.
Incidentally, I also noticed the procedure suggested by the documentation for doing fixed point arithmetic can result in incorrect double rounding in some situations:
>>> (D('1.01')*D('1.46')).quantize(TWOPLACES)  # true result is 1.4746
Decimal('1.48')
Are you using defaults, or did you change something? Here under 3.9.0, showing everything:
This was with the precision set to 4, I forgot to include that.
A rather consequential omission ;)
With default precision the same principle applies but needs much longer operands to demonstrate. The issue arises when, in the true result, the last digit within the context precision is 4, the one after it is 6, 7, 8, or 9, and the one before it is odd. The 4 is rounded up to 5, and that 5 is used to round up the previous digit.
Here's an example with more digits  it's easy enough to generalize.
>>> ctx.prec = 28
>>> (D('1.00000000000001')*D('1.49999999999996')).quantize(D('.00000000000001'))
Decimal('1.49999999999998')
>>> ctx.prec = 29
>>> (D('1.00000000000001')*D('1.49999999999996')).quantize(D('.00000000000001'))
Decimal('1.49999999999997')
The true result of the multiplication here is 1.4999999999999749999999999996, which requires 29 digits of precision
[and, no, it's not just values that look like 999999 and 000001, but a brute-force search takes much longer for 15-digit operands than 3-digit ones]
Understood. The docs are only answering "Once I have valid two place inputs, how do I maintain that invariant throughout an application?". They don't mention double rounding, and I don't know whether the doc author was even aware of the issue.

I argued with Mike Cowlishaw about it when the decimal spec was first being written, pushing back against his claim that it naturally supported fixed-point (as well as floating-point) decimal arithmetic. Precisely because of possible "double rounding" surprises when trying to use the spec to _emulate_ fixed point.

You can worm around it by using "enough" extra precision, but I don't recall the precise bounds needed. For binary arithmetic, and + - * /, if you compute to p bits first and then round again back to q bits, provided p >= 2*q+2 you always get the same result as if you had rounded the infinitely precise result to q bits directly.

But the decimal spec takes a different approach, which Python's docs don't explain at all: the otherwise-mysterious ROUND_05UP rounding mode. Quoting from the spec:

http://speleotrove.com/decimal/damodel.html

    ... The rounding mode round-05-up permits arithmetic at shorter
    lengths to be emulated in a fixed-precision environment without
    double rounding. For example, a multiplication at a precision of 9
    can be effected by carrying out the multiplication at (say) 16
    digits using round-05-up and then rounding to the required length
    using the desired rounding algorithm.

In your original example, 1.01 * 1.46 rounds to 4-digit 1.474 under ROUND_05UP, and then `quantize()` can be used to round that back to 1, 2, or 3 digits under any rounding mode you like. Or, with your last example,
>>> with decimal.localcontext() as ctx:
...     ctx.rounding = decimal.ROUND_05UP
...     r = D('1.00000000000001')*D('1.49999999999996')
...
>>> r
Decimal('1.499999999999974999999999999')
>>> r.quantize(D('.00000000000001'))
Decimal('1.49999999999997')
... I think for some reason I'd assumed the mantissa was represented as a binary number, since the .NET decimal format [which isn't arbitrary-precision] does that. I should probably have looked over the implementation more before jumping in.
As I recall, the pure-Python implementation used Python ints for mantissas. Last I looked, libmpdec uses a vector of 64-bit (C) ints, effectively using base 10**19 (each 64-bit int is "a digit" in range(10**19)).
I'd like a way to losslessly multiply or divide a decimal by a power of ten at least... a sort of decimal equivalent to ldexp.
Again, without concrete examples, that's clear as mud to me.
Er, in this case the conversion of fraction to decimal *is* the concrete example, it's a one-for-one substitution for the use of the string constructor: ldexp(n, -max(e2, e5)) in place of D(f"{n}E-{max(e2, e5)}").
OK. Note that the _actual_ spelling of ldexp in the decimal module is "scaleb". But, like virtually all other operations, scaleb() rounds back to context precision.
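That behavior of scaleb() is easy to check against the tuple-constructor trick from earlier in the thread:

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 2
    d = Decimal("1.2345")
    print(d.scaleb(6))  # rounds to context precision → 1.2E+6
    sign, digits, exp = d.as_tuple()
    print(Decimal((sign, digits, exp + 6)))  # lossless → 1.2345E+6
```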
On Fri, 9 Oct 2020 at 23:41, Tim Peters
But the decimal spec takes a different approach, which Python's docs don't explain at all: the otherwise-mysterious ROUND_05UP rounding mode. Quoting from the spec:
http://speleotrove.com/decimal/damodel.html

    ... The rounding mode round-05-up permits arithmetic at shorter
    lengths to be emulated in a fixed-precision environment without
    double rounding. For example, a multiplication at a precision of 9
    can be effected by carrying out the multiplication at (say) 16
    digits using round-05-up and then rounding to the required length
    using the desired rounding algorithm.
In your original example, 1.01 * 1.46 rounds to 4-digit 1.474 under ROUND_05UP, and then `quantize()` can be used to round that back to 1, 2, or 3 digits under any rounding mode you like.
Or, with your last example,
>>> with decimal.localcontext() as ctx:
...     ctx.rounding = decimal.ROUND_05UP
...     r = D('1.00000000000001')*D('1.49999999999996')
...
>>> r
Decimal('1.499999999999974999999999999')
>>> r.quantize(D('.00000000000001'))
Decimal('1.49999999999997')
And can't this be the default of decimal? As far as I know, this is the default of BigDecimal in Java:
public BigDecimal multiply(BigDecimal multiplicand)
Returns a BigDecimal whose value is (this × multiplicand), and whose scale is (this.scale() + multiplicand.scale()).

Parameters: multiplicand - value to be multiplied by this BigDecimal.
Returns: this * multiplicand
public BigDecimal multiply(BigDecimal multiplicand, MathContext mc)
Returns a BigDecimal whose value is (this × multiplicand), with rounding according to the context settings.
Example online: http://tpcg.io/5axMxUQb
[Tim]
But the decimal spec takes a different approach, which Python's docs don't explain at all: the otherwisemysterious ROUND_05UP rounding mode. Quoting from the spec:
http://speleotrove.com/decimal/damodel.html

    ... The rounding mode round-05-up permits arithmetic at shorter
    lengths to be emulated in a fixed-precision environment without
    double rounding. For example, a multiplication at a precision of 9
    can be effected by carrying out the multiplication at (say) 16
    digits using round-05-up and then rounding to the required length
    using the desired rounding algorithm.
In your original example, 1.01 * 1.46 rounds to 4-digit 1.474 under ROUND_05UP, and then `quantize()` can be used to round that back to 1, 2, or 3 digits under any rounding mode you like.
Or, with your last example,
>>> with decimal.localcontext() as ctx:
...     ctx.rounding = decimal.ROUND_05UP
...     r = D('1.00000000000001')*D('1.49999999999996')
...
>>> r
Decimal('1.499999999999974999999999999')
>>> r.quantize(D('.00000000000001'))
Decimal('1.49999999999997')
[Marco Sulla]
And can't this be the default of decimal?
Try to spell out what you mean -- precisely! -- by "this". I can't do that for you. For any plausible way of fleshing it out I've thought of, the answer is "no". That's quite beyond that we don't have a blank slate: at this point, _nothing_ about the default behavior of `decimal` can be changed without breaking mounds of code. Won't happen.
As far as I know, this is the default of BigDecimal in Java:
Again, I don't know what "this" means.
... Example online: http://tpcg.io/5axMxUQb
That example's output is radically different than in the example you quoted: the result displayed is:

1.4999999999999749999999999996

not the

1.499999999999974999999999999

in the example you quoted. BigDecimal, by default, produces results with an unbounded number of significand digits. 29 in this specific case. The entire point of the original example is that Python's `decimal` defaults to 28 maximum, so needs to round away the trailing "6" from the infinitely precise result. That can lead to "double rounding" errors when that's rounded back again.

The cleverness of ROUND_05UP -- which appears to be senseless at first sight -- is that it manages to use the last retained digit to _encode_ enough information about the infinitely precise result that a second rounding to a narrower precision gives exactly the same result as if it were given the infinitely precise result to work on. Regardless of whether the second rounding is to nearest/even, half-up, toward 0, to-plus-infinity, .... It even has exactly the same effects on the inexact flag. Its only purpose is to eliminate "double rounding" errors -- as a standalone rounding mode, ROUND_05UP is worse than useless (it _appears_ to be a bizarre mix of round-toward-0 and round-away-from-0).

BigDecimal is most naturally suited to fixed point. The number of significand digits varies dynamically, without upper bound, to try to preserve (via a maze of rules) the position of the decimal point as a function of operations' inputs' decimal point locations. It can be used to emulate floating point, but that requires passing MathContext objects too, over & over & over, to keep rounding away unwanted precision.

Python's decimal is most naturally suited to floating point. The maximum number of significand digits is fixed (although user-settable), and the scale factor ("exponent") varies dynamically to keep the number of significand digits in bounds.
It can be used to emulate fixed point, but that requires doing other stuff over & over & over, to keep forcing the decimal point back to the fixed location the user has in mind. The closest you can get to BigDecimal's behavior "by magic" in Python is to set the context precision to its maximum allowed value. But very few people would actually want that! Behold:
>>> import decimal
>>> decimal.Decimal(1) / decimal.Decimal(3)
Decimal('0.3333333333333333333333333333')
That doesn't surprise anyone. But this would:
>>> decimal.getcontext().prec = decimal.MAX_PREC
>>> decimal.Decimal(1) / decimal.Decimal(3)
Traceback (most recent call last):
  ...
MemoryError
That's because:
>>> decimal.MAX_PREC
999999999999999999
is huge on a 64-bit box, and nobody has that much RAM. BigDecimal isn't actually better in this respect: try the similar thing in Java, and it throws java.lang.ArithmeticException, with detail "Non-terminating decimal expansion; no exact representable decimal result.". Approximately nobody wants that as a default behavior. That's why when you see actual Java code doing divisions with BigDecimal, they almost always use the overload that requires passing an explicit MathContext object too, to force a small maximum on the number of significand digits that will be retained.
On Sat, 10 Oct 2020 at 19:28, Tim Peters
Try to spell out what you mean  precisely!  by "this". I can't do that for you. For any plausible way of fleshing it out I've thought of, the answer is "no".
Well, please, don't be so harsh. I'm trying to discuss with someone that co-created Python itself, it's not simple for me :P
The closest you can get to BigDecimal's behavior "by magic" in Python is to set the context precision to its maximum allowed value.
I think there's another "trick" to get the BigDecimal behaviour. If you read the Javadoc, it says that each operation has a default precision. For example, multiplication a*b has precision = a_scale + b_scale. So, in reality, also BigDecimal has a context with finite precision. The difference is that the default context has a variable precision, depending on the operation. Could Python decimal have something similar, maybe by setting prec = 1?
[Tim]
Try to spell out what you mean  precisely!  by "this". I can't do that for you. For any plausible way of fleshing it out I've thought of, the answer is "no".
[Marco Sulla
Well, please, don't be so harsh. I'm trying to discuss with someone that co-created Python itself, it's not simple for me :P
Sorry, I didn't mean to come off as harsh. I did mean to come off as saying that details are important here  indeed, details are darned near everything here. I'm not going to discuss vague wishlists in this area.
The closest you can get to BigDecimal's behavior "by magic" in Python is to set the context precision to its maximum allowed value.
I think there's another "trick" to get the BigDecimal behaviour. If you read the Javadoc, it says that each operation has a default precision. For example, multiplication a*b has precision = a_scale + b_scale. So, in reality, also BigDecimal has a context with finite precision. The difference is that the default context has a variable precision, depending on the operation.
Scale is the conceptual exponent in BigDecimal -- it has nothing to do with digits of precision. For example, every normalized odd BigDecimal integer has scale 0, regardless of whether the integer is 1, or 17 to the millionth power. A BigDecimal with significand `sig` (an integer) and scale `s` (also an integer) represents the real number sig * 10**-s. That's where the motivation for the rule you paraphrased came from: mathematically,

    (a_sig * 10**-a_s) * (b_sig * 10**-b_s) = (a_sig * b_sig) * 10**-(a_s + b_s)

So "the natural" scale of the product is the sum of their scales (a_s + b_s), but the significand of the product is the infinite-precision product of their significands, no matter how many bits that takes. For example, try this:

    BigDecimal a = new BigDecimal("500").multiply(new BigDecimal("100000000"));
    System.out.println(a);
    System.out.println(a.scale());
    System.out.println(a.precision());

The output:

    50000000000
    0
    11

The precision is 11 (decimal) digits, but the scale is 0. Change it to:

    new BigDecimal("5e2").multiply(new BigDecimal("1e8"))

and the output changes to:

    5E+10
    -10
    1
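For comparison, Python's Decimal exposes the analogous pieces through as_tuple(): its exponent is the negation of BigDecimal's "scale", and the length of the digit tuple plays the role of BigDecimal's "precision". A short sketch (assumes the default context, so these small products are exact):

```python
from decimal import Decimal

# "500" and "100000000" carry exponent 0, like BigDecimal scale 0.
a = Decimal('500') * Decimal('100000000')
print(a, len(a.as_tuple().digits), a.as_tuple().exponent)  # 50000000000 11 0

# "5e2" and "1e8" carry their exponents, like BigDecimal scales -2 and -8.
b = Decimal('5e2') * Decimal('1e8')
print(b, len(b.as_tuple().digits), b.as_tuple().exponent)  # 5E+10 1 10
```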
Could Python decimal have something similar, maybe by setting prec = 1?
No ;)
Arbitrary-precision multiple-precision floats in Python: mpmath, gmpy, sympy .evalf() / N()

https://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic

- https://github.com/fredrik-johansson/mpmath
  > Python library for arbitrary-precision floating-point arithmetic
  - docs: http://mpmath.org/doc/current/
  - mpmath is used by SymPy and Sage
  - mpmath uses gmpy if it's installed (otherwise Python ints)
- https://github.com/aleaxit/gmpy
  > General Multi-Precision arithmetic for Python 2.6+/3+ (GMP, MPIR, MPFR, MPC)
  - docs: https://gmpy2.readthedocs.io/en/latest/
  - Integers, Rationals, Reals, Complex

https://docs.sympy.org/latest/modules/evalf.html :

> Exact SymPy expressions can be converted to floating-point approximations (decimal numbers) using either the .evalf() method or the N() function.
> [...]
> By default, numerical evaluation is performed to an accuracy of 15 decimal digits. You can optionally pass a desired accuracy (which should be a positive integer) as an argument to evalf or N:

>>> N(sqrt(2)*pi, 5)
4.4429
>>> N(sqrt(2)*pi, 50)
4.4428829381583662470158809900606936986146216893757
On Sun, 11 Oct 2020 at 19:53, Wes Turner
Arbitraryprecision multipleprecision floats in Python: mpmath, gmpy, sympy .evalf() / N()
mpmath always has a global precision: http://mpmath.org/doc/current/basics.html#setting-the-precision

About SymPy: I worked a little with it within Sage, and it was really amazing, but I think it's too much for the current goal.
On Fri, Oct 9, 2020, at 17:38, Tim Peters wrote:
[Random832
] My suggestion was for a way to make it so that if an exact result is exactly representable at any precision you get that result, with rounding only applied for results that cannot be represented exactly regardless of precision.
That may have been the suggestion in your head ;) , but  trust me on this  it's taken a long time to guess that from what you wrote. Again, you could have saved us all a world of troubles by giving concrete examples.
The problem is that the issue wasn't so much a concrete use case [other than the one thing with Fraction, which doesn't exercise nearly all of the things that I had been suggesting changes for], but a general sense of unease with the philosophy of the decimal module -- the fact that people are willing to say things like "if an exact result is exactly representable, then that's the result you get" when that's not actually true, because they've twisted the meaning of "exactly representable" up in knots.

And the documentation isn't very up front about this. For example, where it talks about how significant figures work for multiplication, it doesn't talk about the context rounding at the end. It says "For multiplication, the “schoolbook” approach uses all the figures in the multiplicands. For instance, 1.3 * 1.2 gives 1.56 while 1.30 * 1.20 gives 1.5600." This would be a good time to mention that the result will be rounded if the context precision is less than 5. It was omissions like these that led me to believe that addition and subtraction *already* behaved as I was suggesting when I first posted this thread.
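The documentation gap described above is easy to demonstrate with the stdlib decimal module: the "schoolbook" product of 1.30 * 1.20 needs 5 significant digits, so a context with fewer silently rounds it.

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 28  # the default: the exact "schoolbook" result fits
    full = Decimal('1.30') * Decimal('1.20')

with localcontext() as ctx:
    ctx.prec = 4   # fewer digits than the exact product needs
    rounded = Decimal('1.30') * Decimal('1.20')

print(full, rounded)  # 1.5600 1.560
```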
What you wrote just now adds another twist: apparently you DO want rounding in some cases ("results that cannot be represented exactly regardless of precision"). The frac2dec function I suggested explicitly raised a ValueError in that case instead, and you said at the time that function would do what you wanted.
It seems like this is incompatible with the design of the decimal module, but it's ***absolutely*** untrue that "if an exact result is exactly representable, then that's the result you get", because *the precision is not part of the representation format*.
Precision certainly is part of the model in the standards the decimal module implements.
But the context precision is not part of the actual data format of a decimal number. The number Decimal('1.23456789') is an identical object no matter what the context precision is, even if that precision is less than 9. Even though it's part of the *model* used for calculations, it's not relevant to the *representation*, so it has no effect on the set of values that are exactly representable. So, "if the exact result is exactly representable that's what you get" is plainly false.
No, it does: for virtually every operation, apart from the constructor, "exactly representable" refers to the context's precision setting.
That is nonsense. "exactly representable" is a plain english phrase and has a clear meaning that only involves the actual data format, not the context.
If you don't believe that, see how and when the "inexact" flag gets set. It's the very meaning of the "inexact" flag that "the infinitely precise result was not exactly representable (i.e., rounding lost some information)".
Nonsense. The meaning of the inexact flag means that the module *chose* to discard some information. There is no actual limit [well, except for much larger limits related to memory allocation and array indexing] to the number of digits that are *representable*, the context precision is an external limit that is not part of the *representation* itself.
[Random832
... That is nonsense. "exactly representable" is a plain english phrase and has a clear meaning that only involves the actual data format, not the context.
The `decimal` module implements a very exacting standard, and the words it uses have technical meanings precisely defined by the standard. The intended meanings are what I've said they are. It's futile to insist that those match your "plain English" meanings. If you believe the docs should be clearer about that, go for it. But, no, they won't be changed to use technical phrases in ways that contradict what the standard intends by them. For example,
Even though it's part of the *model* used for calculations, it's not relevant to the *representation*, so it has no effect on the set of values that are exactly representable. So, "if the exact result is exactly representable that's what you get" is plainly false.
No, under the standard it's plainly true. The key word is "result". The _result_ of virtually every decimal operation is required to round to context precision. It's your harping on "representation" that's irrelevant to this. Else, as already explained, the meaning of the "inexact" flag/signal would be incomprehensible.
Nonsense. The meaning of the inexact flag means that the module *chose* to discard some information.
No "choice" involved: under the standard it's mandatory. If you want decimal arithmetic following other rules, that's fine, but then you're not talking about Python's `decimal` module anymore. And that's the last I have to say about this.
On Wed, Oct 14, 2020 at 03:33:22PM 0400, Random832 wrote:
That is nonsense. "exactly representable" is a plain english phrase and has a clear meaning that only involves the actual data format, not the context.
Perhaps your understanding of plain English is radically different from mine, but I don't understand how that can be. The actual data format has some humongous limits (which may or may not be reachable in practice, due to memory constraints). It is obvious to me that "exactly representable" must take into account the current context:

- If it didn't, then the context precision would be meaningless; changing it wouldn't actually change the precision of calculations.

- Decimal(1)/3 is Decimal('0.3333333333333333333333333333') by default, a mere 28 digits, not MAX_PREC (999999999999999999) digits.
Nonsense. The meaning of the inexact flag means that the module *chose* to discard some information.
Is it truly a choice, if the module has no choice? Do you expect the decimal module to return an exact result for 1/3 or sqrt(2)?

I think that you are just surprised that decimal rounds operations to the current context, not to the implied precision of the operands. My guess is that you were expecting arithmetic on operands to honour their precision:

    getcontext().prec = 3
    Decimal('0.111111111111') + Decimal('0.222222')
    # expect Decimal('0.333333111111')
    # but get Decimal('0.333')

but (aside from "wishful thinking") I can't see why you expected that.

(To be honest, I was surprised to learn that the context precision is ignored when creating a Decimal from a string. I still find it a bit odd that if I set the precision to 3, I can still create a Decimal with twelve digits.)

-- Steve
On Thu, Oct 15, 2020 at 11:18 AM Steven D'Aprano
On Wed, Oct 14, 2020 at 03:33:22PM 0400, Random832 wrote:
That is nonsense. "exactly representable" is a plain english phrase and has a clear meaning that only involves the actual data format, not the context.
Perhaps your understanding of plain English is radically different from mine, but I don't understand how that can be.
The actual data format has some humongeous limits (which may or may not be reachable in practice, due to memory constraints). It is obvious to me that "exactly representable" must take into account the current context:
 If it didn't, then the context precision would be meaningless; changing it wouldn't actually change the precision of calculations.
 Decimal(1)/3 is Decimal('0.3333333333333333333333333333') by default, a mere 28 digits, not MAX_PREC (999999999999999999) digits.
Neither 1/3 nor sqrt(2) can be *exactly represented* as a decimal fraction. It doesn't matter what the precision is set to, there is absolutely no way that they can be perfectly represented (other than symbolically or as a fraction or something, which the decimal module doesn't do). OTOH, 2**X * 5**Y * Z can be exactly represented, for any integers X, Y, and Z; but the precision required might exceed the module's limits. I don't really understand your complaint here. The plain English interpretation of "exactly representable" is, within margin of error, a perfectly representable concept. (The "margin of error" here is that, barring infinite RAM, there will always be *some* limit to the precision stored.)
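Chris's criterion can be checked directly: a fraction has a terminating decimal expansion exactly when its reduced denominator contains no prime factors other than 2 and 5. A sketch (`has_exact_decimal` is my name, not anything from the stdlib):

```python
from fractions import Fraction

def has_exact_decimal(f: Fraction) -> bool:
    # A rational terminates in decimal iff its reduced denominator
    # has no prime factors other than 2 and 5.
    d = f.denominator  # Fraction is always stored in lowest terms
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

print(has_exact_decimal(Fraction(1, 8)))   # True  -> 0.125
print(has_exact_decimal(Fraction(1, 3)))   # False -> 0.333...
print(has_exact_decimal(Fraction(7, 40)))  # True  -> 0.175
```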
(To be honest, I was surprised to learn that the context precision is ignored when creating a Decimal from a string. I still find it a bit odd that if I set the precision to 3, I can still create a Decimal with twelve digits.)
The context is used for arithmetic, not construction.

All that being said, though, I still don't think the Decimal module needs an option to go for infinite precision even if it can be truly exact. The potential for an unexpected performance hit is too high, and the temptation to set this flag would also be very high. (Imagine the Stack Overflow answers: "yeah, the decimal module is inaccurate by default, just set this and it becomes accurate".)

ChrisA
On Thu, Oct 15, 2020 at 11:28:36AM +1100, Chris Angelico wrote:
Neither 1/3 nor sqrt(2) can be *exactly represented* as a decimal fraction.
Indeed, I am very aware of that, and in fact they were precisely the examples I gave to question Random's assertion that inexact rounding is something that the module chooses to do.
I don't really understand your complaint here. The plain English interpretation of "exactly representable" is, within margin of error, a perfectly representable concept. (The "margin of error" here is that, barring infinite RAM, there will always be *some* limit to the precision stored.)
It's not *my* complaint, I'm not making any complaints! Random is, or was, complaining about the way Tim was using "exactly representable". As far as I can tell, Tim, you and me all agree that exactly representable for Decimal should be read as "within the current context's precision", but Random apparently disagrees. I'm not quite sure what he thinks it should be read as, but I guess it might be related to honouring the implicit precision of the Decimal object itself.
(To be honest, I was surprised to learn that the context precision is ignored when creating a Decimal from a string. I still find it a bit odd that if I set the precision to 3, I can still create a Decimal with twelve digits.)
The context is used for arithmetic, not construction.
Ack. I'm not saying it's wrong to do so, only that it surprised me when I first discovered it.  Steve
This all seems a pretty artificial argument.

In plain English, "1/3" is not exactly representable in decimal form, but something like 0.1111111111111111111111111111 * 0.22222222222222222222222222 *is* (assuming the inputs are what they appear).

Since the spec clearly means to say "exactly representable in the (current or given) context's precision" that phrase (or something nonempty like it) should just be added to the docs and the argument is over.

This is not the only place where Python's docs are vague even though the code is careful (e.g. the docs for 'typing' and 'asyncio' could use work, to mention some that I personally care about). Most places that take strings are unclear about whether they also take bytes. Some places that take files are unclear about whether they take file names or file objects (a.k.a. streams). And so on.
On Wed, Oct 14, 2020 at 6:20 PM Steven D'Aprano
-- Guido van Rossum (python.org/~guido)
On Thu, 15 Oct 2020 at 04:22, Guido van Rossum
This all seems a pretty artificial argument.
In plain English, "1/3" is not exactly representable in decimal form, but something like 0.1111111111111111111111111111 * 0.22222222222222222222222222 *is* (assuming the inputs are what they appear).
Since the spec clearly means to say "exactly representable in the (current or given) context's precision" that phrase (or something nonempty like it) should just be added to the docs and the argument is over.
Maybe the argument isn't productive but I think that there is a reasonable feature request in this. The decimal module currently has a way to trap whenever something inexact occurs e.g.:
>>> from decimal import *
>>> getcontext().traps[Inexact] = True
>>> Decimal('1') / Decimal('3')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
decimal.Inexact: [<class 'decimal.Inexact'>]
That's useful for ensuring that a calculation cannot silently propagate rounding errors: if it succeeds the result is exact. In the case of 1/3 no finite level of precision can give an exact result but in other cases it is possible that increasing the precision would:
>>> d = Decimal('1' * 20)
>>> d*d
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
decimal.Inexact: [<class 'decimal.Inexact'>]
>>> getcontext().prec = 50
>>> d*d
Decimal('123456790123456790120987654320987654321')
The useful feature that is not available from the decimal module is a way to say: use enough digits to give an exact result if at all possible but otherwise raise an error (or perhaps round the result depending on traps). There are a couple of obvious strategies for trying to find that number of digits that have been suggested above in this thread.

One approach is to trap the inexact flag and increase the precision if needed:

    while True:
        try:
            calculation()
            break
        except Inexact:
            getcontext().prec *= 2

The problem is that for something like 1/3 this is an infinite loop that will consume all the memory in the machine. Hopefully you'll get something like MemoryError but this can also just bork everything. Of course we can put limits on the loop but we don't actually need a high precision to conclude that an exact division will not be possible in the case of 1/3.

Another approach is just to use the maximum precision specified by the module but that also leads to possible borkage:
>>> getcontext().prec = MAX_PREC
>>> Decimal('1') / Decimal('3')
Python(74959,0x10b4cd5c0) malloc: can't allocate region
*** mach_vm_map(size=421052631578947584) failed (error code=3)
Python(74959,0x10b4cd5c0) malloc: *** set a breakpoint in malloc_error_break to debug
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
MemoryError
A better approach is to think carefully about each individual operation and bound the number of digits that could be needed for any possible exact result:

    from decimal import *

    def exact_divide(a, b):
        num_digits = lambda d: len(d.as_tuple().digits)
        digits_bound = num_digits(a) + 4 * num_digits(b)
        ctx = getcontext().copy()
        ctx.traps[Inexact] = True
        ctx.prec = digits_bound
        return ctx.divide(a, b)

I haven't thought too hard about corner cases but I think that something like num_digits(a) + 4 * num_digits(b) bounds the possible number of digits for any possible exact a/b. After cancelling the gcd of the mantissas we can only divide exactly by 2, 5 or some combination of those. Each factor of 2 or 5 in the divisor requires one extra digit so the worst case is dividing by a power of 2, and 10**n < (2**4)**n. So I think the number of extra digits needed is no more than 4 times the number of digits in the divisor. That gives:
>>> exact_divide(Decimal(1), Decimal(8))
Decimal('0.125')
>>> a = 123456789 ** 10
>>> exact_divide(Decimal(a**2), Decimal(a))
Decimal('822526259147102579504761143661535547764137892295514168093701699676416207799736601')
>>> exact_divide(Decimal(1), Decimal(3))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "asd.py", line 10, in exact_divide
    return ctx.divide(a, b)
decimal.Inexact: [<class 'decimal.Inexact'>]
Of course some of these operations could lead to a lot of memory usage like exact_add(Decimal('1e10000000000000'), Decimal('1')) but there can also be ways to limit the maximum precision that can be used by exact_add without requiring the working precision itself to be at that level for *everything* (including 1/3).

The standards on which the decimal module is based do not require that there be functions for doing this as they are based on the idea that the precision is not necessarily configurable. Python's implementation does provide configurable precision though and also makes it possible to represent an individual decimal object with any number of digits regardless of the active context. It would be useful in some cases to have functions in the decimal module for performing exact-if-possible operations like this. Potentially it could make sense as a special context although I don't know how easy it would be to implement like that.

Also an obvious missing feature is the ability to convert from Fraction to Decimal exactly (when the result can be exact but raising otherwise). In that case we pretty much know that if the Fraction can be represented within the constraints of memory then any corresponding exact Decimal can be as well. The reverse is not the case even though it is possible to convert exactly from Decimal to Fraction:
>>> Fraction(Decimal('1e100000000000'))   # this is dangerous
^C
 Oscar
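Oscar's exact-if-possible Fraction-to-Decimal conversion could be sketched along these lines, combining the digit bound argued above with the Inexact trap (`frac2dec` is a hypothetical name; nothing like it exists in the stdlib):

```python
from decimal import Decimal, Inexact, localcontext
from fractions import Fraction

def frac2dec(f: Fraction) -> Decimal:
    # Convert a Fraction to Decimal exactly, or raise ValueError if no
    # finite number of digits suffices (i.e. the reduced denominator
    # has a prime factor other than 2 or 5).
    with localcontext() as ctx:
        ctx.traps[Inexact] = True
        # numerator digits, plus at most 4 digits per divisor digit
        ctx.prec = len(str(abs(f.numerator))) + 4 * len(str(f.denominator))
        try:
            return Decimal(f.numerator) / Decimal(f.denominator)
        except Inexact:
            raise ValueError("not exactly representable as a Decimal")

print(frac2dec(Fraction(1, 8)))  # 0.125
```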
[Steven D'Aprano
... (To be honest, I was surprised to learn that the context precision is ignored when creating a Decimal from a string. I still find it a bit odd that if I set the precision to 3, I can still create a Decimal with twelve digits.)
You're not alone ;) In the original IEEE 754 standard, string-to-float was "an operation" like any other, required to honor the current rounding mode (and all the rest of the usual rules). It still is, in later revisions. But they don't cater to a single data type supporting multiple precisions -- they have a distinct data type for each distinct precision.

I suggested following the standards' rules (the constructor works the same way as everything else -- it rounds) for Python's module too, but Mike Cowlishaw (the decimal spec's primary driver) overruled me on that. I was really annoyed at the time, but in practice like it fine. Nevertheless, it still feels more like "a wart" than "a feature" ;)
On 15/10/20 1:59 pm, Tim Peters wrote:
I suggested following the standards' rules (the constructor works the same way as everything else  it rounds) for Python's module too, but Mike Cowlishaw (the decimal spec's primary driver) overruled me on that.
Did he offer a rationale for that?  Greg
[Tim]
I suggested following the standards' rules (the constructor works the same way as everything else  it rounds) for Python's module too, but Mike Cowlishaw (the decimal spec's primary driver) overruled me on that.
[Greg Ewing
Did he offer a rationale for that?
Possibly, but I don't know ;-) At that point, I believe it was Raymond Hettinger who was working on Python's `decimal`, and I was discussing it (offline) with him, not with Mike directly at that time. Whoever it was, they informed me about Mike's decision.

While I don't _much_ care, I still think it was a wrong decision, and for an objective reason. Say you have code like

    pi = Decimal("3.14159265358979323846264338327950288419716939937510")

Almost all other implementations of the decimal spec have distinct datatypes for each precision they support (typically less than a handful of fixed possibilities), and when porting that code to run under any such other implementation it _will_ (must!) round to that implementation's default precision type (or to whatever precision type `pi` was declared to belong to). But in all subsequent uses of `pi`, Python will use the full 50 digits regardless of the then-current context precision. So it's a predictable source of "mysterious" numeric differences across implementations, something these extremely detailed standards intended to make a thing of the past.

People aware of, and concerned about, that possibility have an easy workaround, though: they can spell the start of the RHS as "+Decimal". The unary plus is enough to force rounding to current context precision.
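Tim's "+Decimal" workaround can be shown concretely (using 28, the module's default precision, for illustration): the unary plus rounds the 50-digit literal down to the context's 28 digits at assignment time.

```python
from decimal import Decimal, getcontext

getcontext().prec = 28  # the module default, spelled out for clarity
pi = +Decimal("3.14159265358979323846264338327950288419716939937510")
print(pi)                          # 3.141592653589793238462643383
print(len(pi.as_tuple().digits))   # 28
```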
On Thu, 8 Oct 2020 at 16:59, Random832
I was making a "convert Fraction to Decimal, exactly if possible" function and ran into a wall: it's not possible to do some of the necessary operations with exact precision in decimal:
 multiplication  division where the result can be represented exactly [the divisor is an integer whose prime factors are only two and five, or a rational number whose numerator qualifies]
I assume there's some way around it that I haven't spent enough time to figure out [create a temporary context with sufficient digits for multiplication, and work out the reciprocal power of 10 by hand to use this multiplication to implement division], but I feel like these exact operations should be supported in the standard library.
It should be possible to do this by bounding the number of digits
required for an exact operation. We can count the number of digits in
the mantissa of a Decimal like this:
In [100]: num_digits = lambda d: len(d.as_tuple().digits)
In [101]: num_digits(Decimal('12.34'))
Out[101]: 4
If d3 = d1 * d2 then num_digits(d3) <= num_digits(d1) + num_digits(d2)
so we can easily bound the number of digits needed for an exact
multiplication. We can create a context for a number of digits and use
it for multiplication like so:
In [106]: ctx = getcontext().copy()
In [107]: ctx.prec = 4
In [108]: ctx.multiply(Decimal('991'), Decimal('12'))
Out[108]: Decimal('1.189E+4')
So then the result without rounding is:
In [109]: d2 = Decimal('991')
In [110]: d3 = Decimal('12')
In [111]: ctx.prec = num_digits(d2) + num_digits(d3)
In [112]: ctx.multiply(Decimal('991'), Decimal('12'))
Out[112]: Decimal('11892')
For division it is a little more complicated. You say the division is
exact so for some nonnegative integer k and integer n the divisor is
either:
a) 2**k * 10**n
b) 5**k * 10**n
Dividing/multiplying by 10 does not require any increase in precision
but dividing by 2 or 5 requires at most 1 extra digit so:
In [114]: d = Decimal('123.4')
In [115]: ctx.prec = num_digits(d) + 1
In [116]: ctx.divide(d, 2)
Out[116]: Decimal('61.7')
Of course to make that usable you'll need to calculate k.
Also you'll want to set the Inexact trap:
In [119]: ctx.traps[Inexact] = True
In [120]: ctx.divide(d, 3)
---------------------------------------------------------------------------
Inexact                                   Traceback (most recent call last)
<ipython-input-120-9e7694c7c9ce> in <module>
----> 1 ctx.divide(d, 3)

Inexact: [<class 'decimal.Inexact'>]
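The multiplication steps above can be wrapped into a single helper, along the lines of the exact_divide shown earlier in the thread (`exact_multiply` is my name for this sketch, not a stdlib function):

```python
from decimal import Decimal, Inexact, getcontext

def exact_multiply(a: Decimal, b: Decimal) -> Decimal:
    # The product of an m-digit and an n-digit mantissa has at most
    # m + n digits, so this precision always suffices for an exact result.
    num_digits = lambda d: len(d.as_tuple().digits)
    ctx = getcontext().copy()
    ctx.traps[Inexact] = True  # fail loudly if the bound were ever wrong
    ctx.prec = num_digits(a) + num_digits(b)
    return ctx.multiply(a, b)

print(exact_multiply(Decimal('991'), Decimal('12')))     # 11892
print(exact_multiply(Decimal('1.30'), Decimal('1.20')))  # 1.5600
```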
participants (9)

- Chris Angelico
- Greg Ewing
- Guido van Rossum
- Marco Sulla
- Oscar Benjamin
- Random832
- Steven D'Aprano
- Tim Peters
- Wes Turner