Re: [Python-ideas] make decimal the default non-integer instead of float?
-cc: python-dev +cc: python-ideas
On Sat, Sep 29, 2012 at 11:39 AM, Chris Angelico <rosuav@gmail.com> wrote:
On Sun, Sep 30, 2012 at 4:26 AM, Brett Cannon <brett@python.org> wrote:
Does this mean we want to re-open the discussion about decimal constants? Last time this came up I think we decided that we wanted to wait for cdecimal (which is obviously here) and work out how to handle contexts, the syntax, etc.
Just to throw a crazy idea out: How bad a change would it be to make decimal actually the default?
(Caveat: I've not worked with decimal/cdecimal to any real extent and don't know its limitations etc.)
Painful for existing code, unittests and extension modules. Definitely python-ideas territory (thread moved there with an appropriate subject).
I'm not surprised at all that a decimal type can be "fast" in an interpreted language due to the already dominant interpreter overhead.
I wish all spreadsheets had used decimals from day one rather than binary floating point (blame Lotus?). Think of the trouble that would have saved the world.
-gps
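For concreteness, the representation difference Gregory is pointing at looks like this in a CPython 3 interpreter session (a small sketch using the stdlib decimal module; the float output shown is the standard shortest round-trip repr):

    >>> 0.1 + 0.2                        # binary float: 0.1 has no exact base-2 representation
    0.30000000000000004
    >>> from decimal import Decimal
    >>> Decimal('0.1') + Decimal('0.2')  # decimal arithmetic keeps the "human" answer
    Decimal('0.3')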
I like the idea a lot, but I recognize it will get a lot of pushback. I think learning Integer -> Decimal -> Float is a lot more natural than learning Integer -> Float -> Decimal. The Float type represents a specific hardware acceleration with data-loss tradeoffs, and its use should be explicit. I think that as someone learns, the limitations of Decimals will make a lot more sense than those of Floats.

+1

On Sat, Sep 29, 2012 at 3:51 PM, Gregory P. Smith <greg@krypto.org> wrote:
-cc: python-dev +cc: python-ideas
On Sat, Sep 29, 2012 at 11:39 AM, Chris Angelico <rosuav@gmail.com> wrote:
On Sun, Sep 30, 2012 at 4:26 AM, Brett Cannon <brett@python.org> wrote:
Does this mean we want to re-open the discussion about decimal constants? Last time this came up I think we decided that we wanted to wait for cdecimal (which is obviously here) and work out how to handle contexts, the syntax, etc.
Just to throw a crazy idea out: How bad a change would it be to make decimal actually the default?
(Caveat: I've not worked with decimal/cdecimal to any real extent and don't know its limitations etc.)
Painful for existing code, unittests and extension modules. Definitely python-ideas territory (thread moved there with an appropriate subject).
I'm not surprised at all that a decimal type can be "fast" in an interpreted language due to the already dominant interpreter overhead.
I wish all spreadsheets had used decimals from day one rather than binary floating point (blame Lotus?). Think of the trouble that would have saved the world.
-gps
-- Read my blog! I depend on your acceptance of my opinion! I am interesting! http://techblog.ironfroggy.com/ Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy
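Calvin's point that Float is a specific hardware format with data-loss tradeoffs can be made concrete: CPython's float is a C double, which on the usual IEEE 754 platforms carries a 53-bit significand, i.e. roughly 15 decimal digits. A minimal sketch, assuming such a platform:

    >>> import sys
    >>> sys.float_info.mant_dig          # bits of binary precision in the hardware double
    53
    >>> sys.float_info.dig               # decimal digits that survive conversion to float and back
    15
    >>> 2.0 ** 53 + 1 == 2.0 ** 53       # beyond 53 bits, adjacent integers collapse
    True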
On Sat, Sep 29, 2012 at 1:34 PM, Calvin Spealman <ironfroggy@gmail.com> wrote:
I like the idea a lot, but I recognize it will get a lot of pushback. I think learning Integer -> Decimal -> Float is a lot more natural than learning Integer -> Float -> Decimal. The Float type represents a specific hardware acceleration with data-loss tradeoffs, and its use should be explicit. I think that as someone learns, the limitations of Decimals will make a lot more sense than those of Floats.
Hm. Remember decimals have data loss too: they can't represent 1/3 any more accurately than floats can, and like floats they are limited to a certain number of digits, after which they begin dropping precision even if the result *can* be represented exactly. It's just that they can represent 1/5 exactly, which happens to be culturally important to humans, and that the number of digits at which loss of precision happens is configurable. (And the API to configure it may actually make it more complex to learn.)

--Guido van Rossum (python.org/~guido)
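Guido's two caveats are easy to demonstrate: a quotient that does not terminate in base 10 is rounded to the context precision (28 significant digits by default), and that precision is adjustable through the context API. A sketch with the stdlib decimal module:

    >>> from decimal import Decimal, getcontext
    >>> Decimal(1) / Decimal(3)          # rounded to the default 28 significant digits
    Decimal('0.3333333333333333333333333333')
    >>> Decimal(1) / Decimal(5)          # 1/5 is exact in base 10, unlike in base 2
    Decimal('0.2')
    >>> getcontext().prec = 6            # the configurable precision Guido mentions
    >>> Decimal(1) / Decimal(3)
    Decimal('0.333333')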
Guido van Rossum, 29.09.2012 23:06:
On Sat, Sep 29, 2012 at 1:34 PM, Calvin Spealman wrote:
I like the idea a lot, but I recognize it will get a lot of pushback. I think learning Integer -> Decimal -> Float is a lot more natural than learning Integer -> Float -> Decimal. The Float type represents a specific hardware acceleration with data-loss tradeoffs, and its use should be explicit. I think that as someone learns, the limitations of Decimals will make a lot more sense than those of Floats.
Hm. Remember decimals have data loss too: they can't represent 1/3 any more accurately than floats can
That would be "fractions" territory. Given that all three have their own area where they shine, and a large enough area where they really don't, I can't see a strong enough argument for making any of the three "the default". Also note that the current float is a very C-friendly thing, whereas the other two are far from it.

Stefan

Anecdotal PS: I recently got caught in a discussion about the impressive ugliness of decimals in Java, the unsuitability of float in a financial context, and the lack of a better alternative in the programming languages that most people use in that area. A couple of people were surprised by the fact that Python has fractions right in its standard library.
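Stefan's "fractions territory" remark refers to the stdlib fractions module, which stores exact numerator/denominator pairs, so 1/3 survives arithmetic without rounding (the trade-off being arbitrarily growing integers and no fixed-size hardware representation). A minimal sketch:

    >>> from fractions import Fraction
    >>> Fraction(1, 3) + Fraction(1, 3) + Fraction(1, 3)   # exact at every step
    Fraction(1, 1)
    >>> Fraction(1, 10) + Fraction(2, 10)
    Fraction(3, 10)
    >>> float(Fraction(1, 3))            # converting back to float reintroduces rounding
    0.3333333333333333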
Instructive story about fractions: http://python-history.blogspot.com/2009/03/problem-with-integer-division.htm... . Let's not fall into the same trap.
Serhiy Storchaka, 30.09.2012 18:35:
Instructive story about fractions: http://python-history.blogspot.com/2009/03/problem-with-integer-division.htm...
Sorry - I don't get it. Instructive in what way?

Stefan
participants (5)
- Calvin Spealman
- Gregory P. Smith
- Guido van Rossum
- Serhiy Storchaka
- Stefan Behnel