On Thu, Oct 8, 2020, at 16:07, Tim Peters wrote:
See above. It's very much a feature of the IEEE 754/854 standards (of which family Python's decimal model is a part) that if an exact result is exactly representable, then that's the result you get.
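For instance, with the default context (prec=28), an exact product that fits within the precision comes back unrounded:

>>> from decimal import Decimal, getcontext
>>> getcontext().prec
28
>>> Decimal('1.5') * Decimal('1.5')   # exact result 2.25 is within precision
Decimal('2.25')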
My suggestion was for a way to make it so that if an exact result is exactly representable at *any* precision, you get that result, with rounding only applied to results that cannot be represented exactly regardless of precision. It seems this is incompatible with the design of the decimal module, but it's ***absolutely*** untrue that "if an exact result is exactly representable, then that's the result you get", because *the precision is not part of the representation format*. What threw me off is that there is a single type representing an unlimited number of digits, rather than separate types for each precision. I don't think that feature is shared with IEEE, and it creates a philosophical question of interpretation [the answer to which we clearly disagree on] of what it means for a result to be "exactly representable", a question that doesn't exist for the IEEE fixed-length formats. At this point, doing the math in Fractions and converting back and forth to Decimal as necessary is probably good enough, though.
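A minimal sketch of that Fraction workaround, assuming the goal is just to keep every intermediate result exact (the names here are mine):

from decimal import Decimal, localcontext
from fractions import Fraction

# Do the arithmetic in Fraction, which never rounds.
a = Fraction(Decimal('1.01'))   # Fraction() converts a Decimal exactly
b = Fraction(Decimal('1.46'))
product = a * b                 # exactly 7373/5000

# Convert back to Decimal only at the end, so rounding (if any) happens
# once. Here 7373/5000 happens to be exactly representable, so the
# division introduces no error at all.
with localcontext() as ctx:
    ctx.prec = 28
    result = Decimal(product.numerator) / Decimal(product.denominator)

print(result)   # 1.4746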
But that's apparently not what you're asking for.
... Incidentally, I also noticed that the procedure suggested by the documentation for doing fixed-point arithmetic can result in incorrect double rounding in some situations:
>>> (D('1.01') * D('1.46')).quantize(TWOPLACES)   # true result is 1.4746
Decimal('1.48')
Are you using defaults, or did you change something? Here under 3.9.0, showing everything:
This was with the precision set to 4; I forgot to include that. With the default precision the same principle applies, but it needs much longer operands to demonstrate. The issue arises when, in the true result, the last digit within the context precision is 4, the digit after it is 6, 7, 8, or 9, and the digit before it is odd. The 4 is rounded up to 5, and that 5 is then used to round the previous digit up. Here's an example with more digits; it's easy enough to generalize.
>>> ctx.prec = 28
>>> (D('1.00000000000001') * D('1.49999999999996')).quantize(D('.00000000000001'))
Decimal('1.49999999999998')
>>> ctx.prec = 29
>>> (D('1.00000000000001') * D('1.49999999999996')).quantize(D('.00000000000001'))
Decimal('1.49999999999997')
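The intermediate step makes the double rounding visible: the multiplication at prec=28 already rounds the 29-digit true result up, before quantize() rounds a second time:

>>> ctx.prec = 28
>>> D('1.00000000000001') * D('1.49999999999996')
Decimal('1.499999999999975000000000000')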
The true result of the multiplication here is 1.4999999999999749999999999996, which requires 29 digits of precision [and, no, it's not just values that look like 999999 and 000001, but a brute-force search takes much longer for 15-digit operands than for 3-digit ones].
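For what it's worth, here is a sketch of that brute-force search over 3-digit operands (the structure and names are mine; prec=4 is the working precision from the earlier example):

from decimal import Decimal, Context

TWOPLACES = Decimal('0.01')
small = Context(prec=4)    # the working precision from the earlier example
exact = Context(prec=60)   # wide enough that nothing is ever rounded

for i in range(100, 1000):
    for j in range(100, 1000):
        a = Decimal(f"{i}E-2")   # exact: the string constructor never rounds
        b = Decimal(f"{j}E-2")
        recipe = small.multiply(a, b).quantize(TWOPLACES, context=small)
        truth = exact.multiply(a, b).quantize(TWOPLACES, context=exact)
        if recipe != truth:
            print(f"{a} * {b}: recipe gives {recipe}, correct is {truth}")

Among its hits is the 1.01 * 1.46 case above (recipe gives 1.48, correct is 1.47).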
I didn't know that Decimal supported E notation in the constructor, so I thought I would have to multiply or divide by a power of ten directly [which would therefore have to be rounded]... going through the string constructor seems extremely inefficient and inelegant, though.
Not at all! Conversion between decimal strings and decimal.Decimal objects is close to trivial. It's straightforward and linear-time (in the number of significant digits).
I think for some reason I'd assumed the mantissa was represented as a binary number, since the .NET decimal format [which isn't arbitrary-precision] does that. I should probably have looked over the implementation more before jumping in.
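as_tuple() makes the model visible, at least as far as the public API goes: the coefficient is exposed as a tuple of base-10 digits, not a binary integer:

>>> from decimal import Decimal
>>> Decimal('1.4746').as_tuple()
DecimalTuple(sign=0, digits=(1, 4, 7, 4, 6), exponent=-4)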
I'd like a way to losslessly multiply or divide a decimal by a power of ten at least... a sort of decimal equivalent to ldexp.
Again, without concrete examples, that's clear as mud to me.
Er, in this case the conversion of fraction to decimal *is* the concrete example; it's a one-for-one substitution for the use of the string constructor: ldexp(n, -max(e2, e5)) in place of D(f"{n}E-{max(e2, e5)}").
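To make it fully concrete, here is a hedged sketch of such a helper built on the exact tuple form of the constructor (the name dec_ldexp is mine, not the module's):

from decimal import Decimal

def dec_ldexp(n: int, exp: int) -> Decimal:
    """Return n * 10**exp exactly, regardless of context precision."""
    sign = 0 if n >= 0 else 1
    digits = tuple(int(d) for d in str(abs(n)))
    return Decimal((sign, digits, exp))   # the tuple constructor never rounds

The substitution above then reads dec_ldexp(n, -max(e2, e5)). As far as I can tell, Decimal.scaleb() is the closest existing operation, but it's a context operation and rounds to the context precision, which is exactly what this is trying to avoid.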