I see three sane options for float-to-Decimal conversion:
1. Raise an exception.
2. Round to the nearest Decimal value, using the current context for that rounding operation.
3. Do what we're currently doing: an exact conversion.
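For reference, here's a sketch of how options 2 and 3 look with today's decimal module. Option 3 is what the `Decimal` constructor does now; option 2 is already available via `Context.create_decimal_from_float`, which rounds using the context:

```python
from decimal import Decimal, getcontext

# Option 3 (current behaviour): exact conversion of the binary float.
# 0.1 is stored as the nearest double, 3602879701896397 / 2**55, whose
# exact decimal expansion has 55 significant digits.
exact = Decimal(0.1)

# Option 2: round to the nearest Decimal using the current context
# (default precision: 28 significant digits).
ctx = getcontext()
rounded = ctx.create_decimal_from_float(0.1)

assert len(exact.as_tuple().digits) == 55
assert len(rounded.as_tuple().digits) == ctx.prec == 28
```

(Option 1 is what Python 2's `Decimal` constructor used to do: `Decimal(0.1)` raised `TypeError` before the exact conversion was added.)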
Proposals for change should also take into consideration that Decimal already does *exact* conversions for integers (and I believe has done so since it first existed). It would be quite surprising for `Decimal(2**1000)` and `Decimal(2.0**1000)` to be different numbers. If we change the float behaviour, we might also want to change conversion from int to round to the nearest Decimal using the current context. Again, that's closer to what IEEE 754 specifies for the "convertFromInt" operation.
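A quick check of the consistency point: `2.0**1000` is a power of two well within double range, so the float is exactly representable and the two constructors agree today:

```python
from decimal import Decimal

# 2**1000 is far below the float overflow threshold (2**1024), and a
# power of two is exactly representable, so exact conversion gives the
# same Decimal for the int and for the float:
assert Decimal(2**1000) == Decimal(2.0**1000)
assert Decimal(2**1000) == 2**1000  # int conversion is exact, no rounding
```

Under a context-rounding scheme, both of these would instead be rounded to (by default) 28 significant digits, preserving the equality but not the exactness.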
On the other hand, we probably shouldn't lend *too* much weight to IEEE 754, especially when talking about choice of precision. IEEE 754 isn't a perfect fit for Decimal: the IEEE standard is mostly concerned with fixed-width decimal formats, an approach subtly different from Mike Cowlishaw's "extensible precision" model, where the precision is not so closely tied to the format. Python's decimal module is based on Cowlishaw's standard, not on IEEE 754.
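One concrete consequence of the extensible-precision design, as a sketch: in the decimal module, precision is a property of *operations*, not of the stored value. Construction is exact regardless of the context, while arithmetic rounds to the context's precision:

```python
from decimal import Decimal, localcontext

# Construction from a string is exact: all 39 digits are kept, no matter
# what the context precision says.
d = Decimal('1.23456789012345678901234567890123456789')
assert len(d.as_tuple().digits) == 39

with localcontext() as ctx:
    ctx.prec = 5
    # Arithmetic operations (here, unary plus) round the result to the
    # current context's precision:
    assert +d == Decimal('1.2346')
```

In a fixed-width IEEE 754 format (decimal64, decimal128), the precision would instead be baked into the format of the value itself.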