But Decimal(<float>) is relatively new (it was an error in 2.6). I know it's annoying to change it again, but I also have the feeling that there's not much use in producing a Decimal with 53 (or more?) *decimal digits* of precision from a float with 53 *bits* -- at least not by default. Maybe we can relent on this guarantee and make the constructor use fewer digits by default (e.g. the same algorithm used by repr(<float>)), with an optional argument to produce the full precision? Or reserve Decimal.from_float() for that?
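For concreteness, here is a minimal sketch of the two behaviors being compared, assuming the current Python 2.7 / 3.x semantics where both Decimal(<float>) and Decimal.from_float() do the exact binary-to-decimal conversion, and where repr(<float>) produces the shortest string that round-trips (the proposed shorter default would look like the repr-based line):

    from decimal import Decimal

    # Today's behavior: Decimal(<float>) converts the binary value exactly,
    # so 0.1 expands to every decimal digit of the nearest double.
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    # The repr()-based alternative (repr as of 2.7 / 3.1): the shortest
    # string that round-trips back to the same double.
    print(Decimal(repr(0.1)))
    # 0.1

    # Decimal.from_float() also does the exact conversion today; the idea
    # above is that it could be reserved for callers who want full precision.
    print(Decimal.from_float(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625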