On 8 March 2014 18:55, Antoine Pitrou email@example.com wrote:
> On Sat, 8 Mar 2014 18:49:02 +0000
> Mark Dickinson firstname.lastname@example.org wrote:
>> If the proposal goes forward, I'll live with it, and will simply avoid
>> using the Decimal type to convert from floats or ints. But I'd really
>> prefer to keep the short Decimal(x) spelling as something simple and
>> non-magic, and find more complicated ways (quotes!) of spelling the magic
>> conversions.
>
> For the record, as a non-float expert, Mark's arguments convince me.
> Just my 2 cents ;-)
I am in the same position. I "instinctively" wanted Decimal(1.1) to return the same value as Decimal('1.1'), but as I read Mark's argument, it became clearer to me that doing so involves a deliberate loss of precision that is not present anywhere else in the chain of conversions.
The conversion literal 1.1 -> float 1.1 is as precise as it can be, in the sense that no other float better represents the (decimal) literal 1.1. And the current conversion float 1.1 -> Decimal is exact: the resulting Decimal represents exactly the same value as the float.
Converting float 1.1 to Decimal('1.1') deliberately drops precision, even though an exact conversion is possible. I don't think that this is something that should happen implicitly.
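To make that concrete, here is what the two spellings produce today (the long value is the exact binary value of the float 1.1):

    >>> from decimal import Decimal
    >>> Decimal(1.1)        # exact: reproduces the float's true value
    Decimal('1.100000000000000088817841970012523233890533447265625')
    >>> Decimal('1.1')      # the value people usually intend
    Decimal('1.1')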
On the other hand, the reason people write Decimal(1.1) is because they want the decimal value 1.1. (Doh.) The fact that that isn't what they actually get is a mistake they fall into because Python doesn't make it easy to get the decimal value 1.1 (I know the correct approach is Decimal('1.1'), but I don't think that is obvious or intuitive). Decimal literals (1.1d) would resolve this issue without introducing implicit rounding operations.
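In the meantime, a helper along these lines (just a sketch of my own, not anything in the decimal module) gives the intuitively expected value by going via the float's shortest repr:

    >>> from decimal import Decimal
    >>> def decimal_from_repr(x):
    ...     # Hypothetical helper: repr(1.1) is '1.1', so this rounds the
    ...     # float to the value people instinctively expect.
    ...     return Decimal(repr(x))
    ...
    >>> decimal_from_repr(1.1)
    Decimal('1.1')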
The reason Decimal('1.1') feels non-obvious to me is that it feels like it's getting to a decimal "via" a string. Of course, Decimal(1.1) also goes "via" a float. The value of the current behaviour is that it makes that intermediate float visible, and so leads people to an understanding that this is what is actually happening.