[Python-ideas] Python Float Update
nicholas.chammas at gmail.com
Mon Jun 1 19:24:32 CEST 2015
Well, I learned a lot about decimals today. :)
On Mon, Jun 1, 2015 at 3:08 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> In a world of binary computers, no programming language is free of
> those constraints - if you choose decimal literals as your default,
> you take a *big* performance hit, because computers are designed as
> binary systems. (Some languages, like IBM’s REXX, do choose to use
> decimal integers by default)
I guess it’s a non-trivial tradeoff. But I would lean towards considering
people likely to be affected by the performance hit as doing something “not
common”. Like, if they are doing so many calculations that it matters,
perhaps it makes sense to ask them to explicitly ask for floats vs.
decimals, in exchange for giving the majority who wouldn’t notice a
performance difference a better user experience.
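The size of that performance hit is easy to ballpark with the stdlib timeit module (a rough sketch only; absolute numbers vary by machine and Python version):

```python
from timeit import timeit

N = 100_000

# Time the same expression with machine floats and with decimal.Decimal.
float_time = timeit("x * y + x", setup="x = 1.1; y = 2.2", number=N)
dec_time = timeit(
    "x * y + x",
    setup="from decimal import Decimal; x = Decimal('1.1'); y = Decimal('2.2')",
    number=N,
)

print(f"float:   {float_time:.4f} s")
print(f"Decimal: {dec_time:.4f} s")
```

On CPython builds with the C-accelerated _decimal module the gap is much smaller than it was with the old pure-Python implementation, but hardware binary floats still come out ahead.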
On Mon, Jun 1, 2015 at 10:58 AM, Steven D’Aprano <steve at pearwood.info> wrote:
> I wish this myth about Decimals would die, because it isn’t true.
Your email had a lot of interesting information about decimals that would
make a good blog post, actually. Writing one up will perhaps help kill this
myth in the long run :)
> In the past, I’ve found that people are very resistant to this fact, so
> I’m going to show a few examples of how Decimals violate the fundamental
> laws of mathematics just as floats do.
How many of your examples are inherent limitations of decimals vs. problems
that can be improved upon?
Admittedly, the only place where I’ve played with decimals extensively is
on Microsoft’s SQL Server (where they are the default literal
<https://msdn.microsoft.com/en-us/library/ms179899.aspx>). I’ve stumbled in
the past on my own decimal gotchas
<http://dba.stackexchange.com/q/18997/2660>, but looking at your examples
and trying them on SQL Server I suspect that most of the problems you show
are problems of precision and scale.
Perhaps Python needs better rules for how precision and scale are affected
by calculations (here are SQL Server’s
<https://msdn.microsoft.com/en-us/library/ms190476.aspx>, for example), or
better defaults when they are not specified?
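For what it’s worth, Python’s decimal module already has a hook in this direction: arithmetic happens inside a context whose traps can turn silent rounding into a loud error, much like the truncation error SQL Server raises. A small sketch with the stdlib:

```python
from decimal import Decimal, Inexact, localcontext

with localcontext() as ctx:
    ctx.traps[Inexact] = True  # any rounding now raises instead of passing silently

    print(Decimal(1) / Decimal(4))  # 0.25 is exact, so this is fine

    try:
        Decimal(1) / Decimal(3)  # cannot be represented exactly at any finite precision
    except Inexact:
        print("1/3 would have been rounded")
```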
Anyway, here’s what happens on SQL Server for some of the examples you provided:
py> from decimal import Decimal as D
py> x = D(10)**30
py> x == x + 100 # should be False
DECLARE @x DECIMAL(38,0) = '1' + REPLICATE('0', 30);
IF @x = @x + 100
    SELECT 'equal' AS adding_100
ELSE
    SELECT 'not equal' AS adding_100
Gives “not equal” <http://sqlfiddle.com/#!6/9eecb7db59d16c80417c72d1/1645/0>.
Leaving out the precision when declaring @x (i.e. going with the default
precision of 18 <https://msdn.microsoft.com/en-us/library/ms187746.aspx>)
immediately yields an understandable data truncation error.
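The Python analogue of widening the declared precision is raising the context precision: the default context carries 28 significant digits, which is why x == x + 100 comes out True, and a wider context flips it to False (a sketch):

```python
from decimal import Decimal, getcontext, localcontext

x = Decimal(10) ** 30

print(getcontext().prec)  # default precision: 28 significant digits
print(x == x + 100)       # True: the +100 needs 31 digits and is rounded away

with localcontext() as ctx:
    ctx.prec = 40            # enough digits to carry the full sum
    print(x == x + 100)      # False, as it should be
```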
py> a = D(1)/17
py> b = D(5)/7
py> c = D(12)/13
py> (a + b) + c == a + (b+c)
DECLARE @a DECIMAL = 1.0/17;
DECLARE @b DECIMAL = 5.0/7;
DECLARE @c DECIMAL = 12.0/13;
IF (@a + @b) + @c = @a + (@b + @c)
    SELECT 'equal' AS associative
ELSE
    SELECT 'not equal' AS associative
Gives “equal” <http://sqlfiddle.com/#!6/9eecb7db59d16c80417c72d1/1656/0>.
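Worth noting: when exactness is the actual requirement, Python’s stdlib fractions module sidesteps the question entirely, since exact rational arithmetic is associative by construction (a quick sketch):

```python
from fractions import Fraction

a = Fraction(1, 17)
b = Fraction(5, 7)
c = Fraction(12, 13)

# No rounding ever happens, so grouping can never matter.
print((a + b) + c == a + (b + c))  # True
print((a + b) + c)                 # 2624/1547
```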
py> a = D(15)/2
py> b = D(15)/8
py> c = D(1)/14
py> a*(b+c) == a*b + a*c
DECLARE @a DECIMAL = 15.0/2;
DECLARE @b DECIMAL = 15.0/8;
DECLARE @c DECIMAL = 1.0/14;
IF @a * (@b + @c) = @a*@b + @a*@c
    SELECT 'equal' AS distributive
ELSE
    SELECT 'not equal' AS distributive
Gives “equal” <http://sqlfiddle.com/#!6/9eecb7db59d16c80417c72d1/1655/0>.
I think some of the other decimal examples you provide, though definitely
not 100% beginner friendly, are still way more human-friendly because they
are explainable in terms of precision and scale, which we can understand
more simply (“there aren’t enough decimal places to carry the result”) and
which have parallels in other areas of life as Paul pointed out.
> - The sorts of errors we see with floats are not “madness”, but the
> completely logical consequences of what happens when you try to do
> arithmetic in anything less than the full mathematical abstraction.
I don’t mean madness as in incorrect, I mean madness as in difficult to
predict and difficult to understand.
Your examples do show that it isn’t all roses and honey with decimals, but
do you find it easier to understand and explain all the weirdness of floats
vs. that of decimals? Understanding float weirdness (and disclaimer: I
don’t) seems to require understanding some hairy stuff, and even then it is
not predictable because there are platform-dependent issues. Understanding
decimal “weirdness” seems to require only understanding precision and
scale, and after that it is mostly predictable.
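The asymmetry shows up in the very first example most beginners hit. With binary floats, explaining 0.1 + 0.2 requires a digression into base-2 representation; with decimals, the literals are represented exactly and there is nothing to explain:

```python
from decimal import Decimal

# Binary floats cannot represent 0.1, 0.2 or 0.3 exactly, so:
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004

# These literals are exact as Decimals, so the comparison just works:
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True
```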
On Mon, Jun 1, 2015 at 11:19 AM Paul Moore <p.f.moore at gmail.com> wrote:
On 1 June 2015 at 15:58, Steven D'Aprano <steve at pearwood.info> wrote:
> > (Decimals *only* win out due to human bias: we don't care too much that
> > 1/7 cannot be expressed exactly as a float using *either* binary or
> > decimal, but we do care about 1/10. And we conveniently ignore the case
> > of 1/3, because familiarity breeds contempt.)
> There is one other "advantage" to decimals - they behave like
> electronic calculators (which typically used decimal arithmetic). This
> is a variation of "human bias" - we (if we're of a certain age, maybe
> today's youngsters are less used to the vagaries of electronic
> calculators :-)) are used to seeing 1/3 displayed as 0.33333333, and
> showing that 1/3*3 = 0.99999999 was a "fun calculator fact" when I was
> at school.
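That calculator behaviour is easy to reproduce by shrinking the decimal context down to eight digits (a sketch):

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 8  # behave like an eight-digit pocket calculator

    third = Decimal(1) / 3
    print(third)      # 0.33333333
    print(third * 3)  # 0.99999999 -- the "fun calculator fact"
```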