When a non-expert writes Decimal(1.1), each of the three outcomes above is surprising. We know that (1) was unpopular; that's why we changed it. We now know that (3) is unpopular in at least some circles (Mark Harrison can't be the only one who doesn't like it). Changing to (2) wouldn't do much to address this, because the default context has far more precision than a float, so it would still show a lot of extraneous digits.
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 16
>>> +Decimal(1.1) # + to simulate rounding to 16 digits
Decimal('1.100000000000000')
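For comparison, here is what the simulation of (2) looks like under the stock settings (a fresh interpreter session, added here for illustration; the default context carries 28 significant digits):
>>> from decimal import Decimal, getcontext
>>> getcontext().prec  # default precision
28
>>> +Decimal(1.1)  # what (2) would give by default
Decimal('1.100000000000000088817841970')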
This feels like a strawman, since Decimal(2**1000 + 1) and Decimal(2.0**1000 + 1) produce different outcomes (the latter still gives a lot of digits, but is one too small), and the analogous examples with exponent 10000 differ dramatically (the float version raises an exception before Decimal() is even reached).
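Concretely (an illustrative snippet added here; the exact OverflowError message is platform-dependent):
>>> from decimal import Decimal
>>> Decimal(2**1000 + 1) == Decimal(2.0**1000 + 1)
False
>>> Decimal(2**1000 + 1) - Decimal(2.0**1000 + 1)  # the float addition silently dropped the +1
Decimal('1')
>>> 2.0**10000 + 1  # with exponent 10000 the float version blows up before Decimal() ever sees it
Traceback (most recent call last):
  ...
OverflowError: (34, 'Numerical result out of range')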
For most purposes it's a fluke that 2.0**1000 can be represented exactly as a float, and the argument doesn't convince me at all. There are just too many odd examples like that, and it will always remain a minefield.
>>> x = 49534541648432951
>>> y = x + 2.0
>>> x < y
True
>>> Decimal(x) > Decimal(repr(y)) # simulating proposed meaning of Decimal(y)
True
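So x < y, and yet under the proposed meaning Decimal(x) > Decimal(y): going through repr() drops just enough precision to flip the comparison. For contrast, a small added check (continuing the same session) shows that the current exact conversion keeps the ordering consistent:
>>> Decimal(x) < Decimal(y)  # (3), today's exact conversion, agrees with x < y
True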
I wonder what Cowlishaw would say about our current discussion. He is also the father of Rexx...