On Fri, Oct 9, 2020, at 17:38, Tim Peters wrote:

[Random832 <random832@fastmail.com>]

My suggestion was for a way to make it so that if an exact result is exactly representable at any precision you get that result, with rounding only applied for results that cannot be represented exactly regardless of precision.

That may have been the suggestion in your head ;-) , but - trust me on this - it's taken a long time to guess that from what you wrote. Again, you could have saved us all a world of troubles by giving concrete examples.

The problem is that the issue wasn't so much a concrete use case [other than the one thing with Fraction, which doesn't exercise nearly all of the things I had been suggesting changes for] as a general sense of unease with the philosophy of the decimal module - the fact that people are willing to say things like "if an exact result is exactly representable, then that's the result you get" when that's not actually true, because they've twisted the meaning of "exactly representable" up in knots.

And the documentation isn't very up front about this. For example, where it talks about how significant figures work for multiplication, it doesn't mention the context rounding applied at the end. It says "For multiplication, the “schoolbook” approach uses all the figures in the multiplicands. For instance, 1.3 * 1.2 gives 1.56 while 1.30 * 1.20 gives 1.5600." That would be a good place to mention that the result will be rounded if the context precision is less than 5. It was omissions like these that led me to believe that addition and subtraction *already* behaved as I was suggesting when I first posted this thread.
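
To make that documentation gap concrete (a small demonstration with the stdlib decimal module; the rounding behavior shown is exactly the part the docs omit):

```python
from decimal import Decimal, localcontext

# At the default precision (28 digits) the "schoolbook" product keeps
# all of its figures, matching the documentation's example.
print(Decimal('1.30') * Decimal('1.20'))   # prints 1.5600

# But the exact product has 5 significant digits, so a context with
# precision below 5 silently rounds it.
with localcontext() as ctx:
    ctx.prec = 3
    print(Decimal('1.30') * Decimal('1.20'))   # prints 1.56

# Addition behaves the same way: the exact sum 3.33 is representable
# in the data format, but a 2-digit context rounds it anyway.
with localcontext() as ctx:
    ctx.prec = 2
    print(Decimal('1.10') + Decimal('2.23'))   # prints 3.3
```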

What you wrote just now adds another twist: apparently you DO want rounding in some cases ("results that cannot be represented exactly regardless of precision"). The frac2dec function I suggested explicitly raised a ValueError in that case instead, and you said at the time that function would do what you wanted.
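
For concreteness, here is one hedged sketch of what a frac2dec along those lines might look like. The name and the raise-ValueError-instead-of-rounding behavior come from this thread; the implementation itself is only my illustration, not the proposed one:

```python
from decimal import Decimal
from fractions import Fraction

def frac2dec(f: Fraction) -> Decimal:
    """Return the exact Decimal equal to f, or raise ValueError if f
    has no finite decimal expansion (illustrative sketch only)."""
    n, d = f.numerator, f.denominator
    # A reduced fraction terminates in base 10 iff its denominator's
    # only prime factors are 2 and 5.
    e2 = e5 = 0
    while d % 2 == 0:
        d //= 2
        e2 += 1
    while d % 5 == 0:
        d //= 5
        e5 += 1
    if d != 1:
        raise ValueError("no exact finite decimal representation")
    shift = max(e2, e5)
    # Scale the numerator so the denominator becomes 10**shift, then
    # build the Decimal via the constructor, which never rounds.
    n *= 2 ** (shift - e2) * 5 ** (shift - e5)
    return Decimal(f"{n}E-{shift}")
```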

It seems like this is incompatible with the design of the decimal module, but it's ***absolutely*** untrue that "if an exact result is exactly representable, then that's the result you get", because *the precision is not part of the representation format*.

Precision certainly is part of the model in the standards the decimal module implements.

But the context precision is not part of the actual data format of a decimal number. The number Decimal('1.23456789') is an identical object no matter what the context precision is, even if that precision is less than 9. Even though it's part of the *model* used for calculations, it's not relevant to the *representation*, so it has no effect on the set of values that are exactly representable. So, "if the exact result is exactly representable that's what you get" is plainly false.
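
This is easy to demonstrate: the constructor stores the digits untouched regardless of the context, while even a no-op arithmetic operation applies the context's precision:

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 4
    d = Decimal('1.23456789')   # constructor never rounds: all 9 digits kept
    print(d)                    # prints 1.23456789
    print(+d)                   # unary plus applies the context: prints 1.235
```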

No, it does: for virtually every operation, apart from the constructor, "exactly representable" refers to the context's precision setting.

That is nonsense. "Exactly representable" is a plain English phrase with a clear meaning that involves only the actual data format, not the context.

If you don't believe that, see how and when the "inexact" flag gets set. It's the very meaning of the "inexact" flag that "the infinitely precise result was not exactly representable (i.e., rounding lost some information)".

Nonsense. The inexact flag means that the module *chose* to discard some information. There is no actual limit [well, except for much larger limits related to memory allocation and array indexing] on the number of digits that are *representable*; the context precision is an external limit that is not part of the *representation* itself.
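
And the representation itself really has no precision limit:

```python
from decimal import Decimal, getcontext

# The coefficient of a Decimal can hold arbitrarily many digits; the
# context precision constrains operations, not the stored value.
d = Decimal('1.' + '9' * 100)          # 101 significant digits
print(len(d.as_tuple().digits))        # prints 101
print(getcontext().prec)               # default is 28, irrelevant to d
```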