
[Paul Moore]
I can't see why, if you can first get an (effectively, according to whatever rules you want to apply) exact Decimal representation of your "number", you can't do any further scaling, changes of precision, etc., entirely with Decimal instances, and with minimal loss of runtime efficiency.
This is effectively saying:

(1) Create a decimal using the default context.
(2) Change the context to my custom context.
(3) Perform various rounding and scaling operations.
(4) Change the context back to the default.
vs
(1) Create a decimal using my custom context.
The four-step procedure may (or may not) be done just as efficiently under the covers, but it is ugly.
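For concreteness, here is roughly what the two spellings look like with the decimal module as it stands today; the particular numbers, the prec=6 context, and the use of Context.create_decimal for the one-step form are only my illustration:

    from decimal import Decimal, Context, getcontext, setcontext, ROUND_HALF_EVEN

    # A custom context with reduced precision (values chosen only for illustration).
    custom = Context(prec=6, rounding=ROUND_HALF_EVEN)

    # Four-step version: build the value first, then juggle the thread's context.
    d = Decimal("1.23456789")                     # (1) construction is exact; no context applied
    saved = getcontext()                          #     remember the default context
    setcontext(custom)                            # (2) switch to the custom context
    result = (d * 100).quantize(Decimal("1.00"))  # (3) scaling and rounding under the custom rules
    setcontext(saved)                             # (4) switch back to the default

    # One-step version: have the custom context create the decimal directly.
    d2 = custom.create_decimal("1.23456789")      # rounded to prec=6 at creation time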
Is there any reason why input and output should be the only operations that do not honor an optional local context?
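The asymmetry is easy to see in the current module (the precision here is arbitrary): construction keeps every digit of its input no matter what the context says, while even a trivial arithmetic operation such as unary plus rounds to the context:

    from decimal import Decimal, getcontext

    getcontext().prec = 4            # arbitrary precision, for illustration only
    x = Decimal("3.14159265")        # input: the context is ignored, all digits are kept
    y = +x                           # arithmetic: rounds to the context -> Decimal('3.142')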
-jJ