Guido van Rossum, 29.09.2012 23:06:
On Sat, Sep 29, 2012 at 1:34 PM, Calvin Spealman wrote:
I like the idea a lot, but I recognize it will get a lot of pushback. I think learning Integer -> Decimal -> Float is a lot more natural than learning Integer -> Float -> Decimal. The Float type represents a specific hardware acceleration with data-loss tradeoffs, and its use should be explicit. I think that as someone learns, the limitations of Decimals will make a lot more sense than those of Floats.
Hm. Remember that decimals have data loss too: they can't represent 1/3 any more accurately than floats can.
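A quick interpreter sketch (illustrative only, assuming the decimal module's default 28-digit context) makes the point concrete: both representations truncate 1/3.

    # Illustrative sketch: binary float and Decimal both truncate 1/3.
    from decimal import Decimal, getcontext

    getcontext().prec = 28                # the decimal module's default precision
    print(1 / 3)                          # 0.3333333333333333 (binary float)
    print(Decimal(1) / Decimal(3))        # 0.3333333333333333333333333333 (still inexact)
    print(Decimal(1) / Decimal(3) * 3)    # 0.9999999999999999999999999999, not 1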
That would be "fractions" territory. Given that all three have their own area where they shine, and a large enough area where they really don't, I can't see a strong enough argument for making any of the three "the default". Also note that the current float is a very C-friendly thing, whereas the other two are far from it.

Stefan

Anecdotal PS: I recently got caught in a discussion about the impressive ugliness of decimals in Java, the unsuitability of float in a financial context, and the lack of a better alternative in the programming languages that most people use in that area. A couple of people were surprised by the fact that Python has fractions right in its standard library.
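For comparison, a minimal sketch of the fractions module mentioned above; exact rationals survive the round trips that floats and decimals lose.

    # Illustrative sketch: fractions.Fraction stores exact rationals.
    from fractions import Fraction

    third = Fraction(1, 3)
    print(third)                               # 1/3
    print(third * 3)                           # 1 -- exact, unlike float or Decimal
    print(Fraction(1, 10) + Fraction(2, 10))   # 3/10, no binary rounding surprise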