I like the idea a lot, but I recognize it will get a lot of pushback. I think learning Integer -> Decimal -> Float is a lot more natural than learning Integer -> Float -> Decimal. The Float type represents a specific hardware acceleration with data-loss tradeoffs, and its use should be explicit. I think that as someone learns, the limitations of Decimals will make a lot more sense than those of Floats.

+1
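For instance, a quick session showing the tradeoff I mean (standard library only; the exact float digits depend on the platform's doubles):

    from decimal import Decimal

    # Binary floats cannot represent 0.1 exactly; the error is invisible
    # in the literal but shows up in arithmetic:
    print(0.1 + 0.2)                        # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)                 # False

    # Decimal rounds in base 10, where a learner can actually see it:
    print(Decimal("0.1") + Decimal("0.2"))  # Decimal('0.3')
    print(Decimal(1) / Decimal(3))          # Decimal('0.3333333333333333333333333333')

On Sat, Sep 29, 2012 at 3:51 PM, Gregory P. Smith <greg@krypto.org> wrote: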
-cc: python-dev +cc: python-ideas
On Sat, Sep 29, 2012 at 11:39 AM, Chris Angelico <rosuav@gmail.com> wrote:
On Sun, Sep 30, 2012 at 4:26 AM, Brett Cannon <brett@python.org> wrote:
Does this mean we want to re-open the discussion about decimal constants? Last time this came up I think we decided that we wanted to wait for cdecimal (which is obviously here) and work out how to handle contexts, the syntax, etc.
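(For anyone who hasn't followed the cdecimal work: "contexts" are decimal's per-thread arithmetic settings, and any literal syntax has to decide which one applies. A minimal sketch with the existing stdlib API:)

    from decimal import Decimal, localcontext

    # The active context controls precision and rounding for all
    # Decimal arithmetic in the current thread.
    print(Decimal(1) / Decimal(7))      # 28 significant digits by default

    with localcontext() as ctx:
        ctx.prec = 6                    # temporarily lower the precision
        print(Decimal(1) / Decimal(7))  # Decimal('0.142857')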
Just to throw a crazy idea out: How bad a change would it be to make decimal actually the default?
(Caveat: I've not worked with decimal/cdecimal to any real extent and don't know its limitations etc.)
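To make the idea concrete, here is roughly the behavior change being floated, spelled out with today's explicit syntax (the literal semantics are hypothetical, not an implemented feature):

    from decimal import Decimal

    # Today a fractional literal gives you a binary double:
    print(type(1.1))           # <class 'float'>
    print(1.1 * 3)             # 3.3000000000000003

    # Under the proposal, a literal like 1.1 would behave as if written:
    print(Decimal("1.1") * 3)  # Decimal('3.3')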
Painful for existing code, unit tests, and extension modules. Definitely python-ideas territory (thread moved there with an appropriate subject).
I'm not surprised at all that a decimal type can be "fast" in an interpreted language due to the already dominant interpreter overhead.
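(If anyone wants to put numbers on that, a rough micro-benchmark like the following works; the ratio varies by machine and by whether decimal is the C implementation:)

    import timeit

    # One million iterations each by default; absolute times are
    # machine-dependent, only the ratio is interesting.
    flt = timeit.timeit("x * y + x", setup="x = 1.1; y = 2.2")
    dec = timeit.timeit(
        "x * y + x",
        setup="from decimal import Decimal; x = Decimal('1.1'); y = Decimal('2.2')",
    )
    print("float:   %.2f s" % flt)
    print("Decimal: %.2f s" % dec)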
I wish all spreadsheets had used decimals from day one rather than binary floating point (blame Lotus?). Think of the trouble that would have saved the world.
-gps
-- Read my blog! I depend on your acceptance of my opinion! I am interesting! http://techblog.ironfroggy.com/ Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy