I can't see why, once you have an exact Decimal representation of your "number" (exact according to whatever rules you want to apply), you can't do all further scaling, precision changes, and so on entirely with Decimal instances, and with minimal loss of runtime efficiency.
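As a sketch of that point, using the decimal API as it exists today (the precision and rounding values below are illustrative, not anything prescribed in the thread): the string constructor is exact, and subsequent scaling and quantizing can each take an explicit context.

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN

# Exact representation first: the string constructor does not round.
d = Decimal("123.456789")

# Further scaling and precision changes done entirely with Decimal
# instances, under an explicitly supplied context.
ctx = Context(prec=6, rounding=ROUND_HALF_EVEN)
scaled = d.scaleb(-2, context=ctx)                 # shift exponent by -2, round to 6 digits
quantized = d.quantize(Decimal("0.01"), context=ctx)  # fix the exponent at 10**-2

print(scaled)     # 1.23457
print(quantized)  # 123.46
```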
This is effectively saying

    (1) Create a decimal using the default context.
    (2) Change the context to my custom context.
    (3) Perform various rounding and scaling operations.
    (4) Change the context back to the default.

instead of

    (1) Create a decimal using my custom context.
The four-step procedure may (or may not) be done just as efficiently under the covers, but it is ugly.
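A minimal sketch of the four-step dance, using the current decimal API (the particular context settings here are made up for illustration):

```python
from decimal import Decimal, Context, getcontext, setcontext, ROUND_HALF_UP

# Hypothetical custom context with 4 digits of precision.
my_ctx = Context(prec=4, rounding=ROUND_HALF_UP)

d = Decimal("3.14159265")   # (1) construct exactly; the constructor ignores context
saved = getcontext()        # (2) swap in the custom context
setcontext(my_ctx)
rounded = +d                #     unary plus applies the active context's rounding
setcontext(saved)           # (4) restore the default context

print(rounded)  # 3.142
```

(Step (3) stands in for whatever rounding and scaling operations are wanted; unary plus is just the shortest one.)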
Is there any reason why input and output should be the only operations that do not honor an optional local context?
[Jewett, Jim J]
... Is there any reason why input and output should be the only operations that do not honor an optional local context?
While all operations in the spec except for the two to-string operations use context, no operation in the spec supports an optional local context. That the Decimal() constructor ignores context by default is an extension to the spec. We must supply a context-honoring from-string operation to meet the spec. I recommend against any concept of "local context" in any operation -- it complicates the model and isn't necessary. Piles of "convenience features" should wait until people actually use this in real life, so we can judge what's truly clumsy based on experience instead of speculation.
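For concreteness, the context-honoring from-string operation described here exists in the decimal module as it stands today as Context.create_decimal, which applies the context's precision and rounding during conversion (settings below are illustrative):

```python
from decimal import Context, ROUND_HALF_UP

ctx = Context(prec=4, rounding=ROUND_HALF_UP)

# Unlike Decimal("..."), conversion through the context rounds the input.
d = ctx.create_decimal("3.14159265")
print(d)  # 3.142
```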