I suggested following the standards' rules (the constructor works the same way as everything else - it rounds) for Python's module too, but Mike Cowlishaw (the decimal spec's primary driver) overruled me on that.
[Greg Ewing email@example.com]
Did he offer a rationale for that?
Possibly, but I don't know ;-) At that point, I believe it was Raymond Hettinger who was working on Python's `decimal`, and I was discussing it (offline) with him rather than with Mike directly. Whoever it was, they relayed Mike's decision to me.
While I don't _much_ care, I still think it was a wrong decision, and for an objective reason. Say you have code like
pi = Decimal("3.14159265358979323846264338327950288419716939937510")
Almost all other implementations of the decimal spec have distinct datatypes for each precision they support (typically only a handful of fixed possibilities), and when porting that code to run under any such implementation it _will_ (must!) round to that implementation's default precision type (or to whatever precision type `pi` was declared to belong to).
But in all subsequent uses of `pi`, Python will use the full 50 digits regardless of the then-current context precision.
So it's a predictable source of "mysterious" numeric differences across implementations, something these extremely detailed standards intended to make a thing of the past.
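To make the difference concrete, here's a small sketch against Python's `decimal` module with its default 28-digit context (the precision value is the stdlib default, not anything from the thread):

```python
from decimal import Decimal, getcontext

getcontext().prec = 28  # Python's default context precision

pi = Decimal("3.14159265358979323846264338327950288419716939937510")

# The constructor does not round: all 50 digits survive construction,
# so later operations see the full-precision operand even though the
# context says 28 digits.
assert str(pi) == "3.14159265358979323846264338327950288419716939937510"
```

An implementation whose constructor honors the context would instead store a 28-digit value here, which is where the cross-implementation differences creep in.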
People aware of - and concerned about - that possibility have an easy workaround, though: they can spell the start of the RHS as "+Decimal". The unary plus is enough to force rounding to the current context precision.
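A minimal illustration of that workaround (again assuming the stdlib's default 28-digit context):

```python
from decimal import Decimal, getcontext

getcontext().prec = 28  # default context precision

# Unary plus applies the current context, so the 50-digit literal is
# rounded to 28 significant digits on the spot.
pi = +Decimal("3.14159265358979323846264338327950288419716939937510")
print(pi)  # 3.141592653589793238462643383
```

This makes the code behave like the fixed-precision implementations: whatever the context precision is at assignment time, that's what `pi` gets.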