Floating point multiplication in python

Steven D'Aprano steve+comp.lang.python at pearwood.info
Wed Sep 7 06:51:13 CEST 2011

On Wed, 7 Sep 2011 02:07 am Thomas 'PointedEars' Lahn wrote:

> Thomas Rachel wrote:
>> Now if you multiply two values with an error, the error also propagates
>> into the result - PLUS the result can have its own error source - in the
>> same order of magnitude.
>> (a+e) * (a+e) = a*a + 2*a*e + e*e. So your new error term is 2*a*e + e*e
>> or (2*a + e) * e.
> Your explanation about floating-point precision, which I already knew
> about but have only scanned here – so it might be flawed as well –,
> notwithstanding, it is not clear to me at all what you are trying to prove
> there.
> Computers (well, perhaps outside of mathematical software) do NOT compute
> an equation as humans would do, so the binomial theorem does NOT apply. 

I think you have misunderstood. The binomial theorem always applies. Any
time you multiply two numbers, both numbers can always be re-written as a
sum of two numbers:

10*5 = (6+4)*(2+3)

So a perfect square can always be re-written in the form where the binomial
theorem applies:

5*5 = (2+3)*(2+3)
25 = 2*2 + 2*3 + 3*2 + 3*3
25 = 4 + 6 + 6 + 9
25 = 25
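The expansion above can be checked mechanically in Python (just verifying the arithmetic, nothing more):

```python
# (a+b)*(a+b) expands to a*a + a*b + b*a + b*b per the binomial theorem
a, b = 2, 3
lhs = (a + b) * (a + b)
rhs = a*a + a*b + b*a + b*b
assert lhs == rhs == 25
```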

The binomial theorem is not a part of the algorithm for performing
multiplication. It is part of the analysis of the errors that occur during
multiplication. The actual mechanics of how bits are flipped is irrelevant.

Any floating point number x should be considered as equal to (a+e), where a
is the number actually wanted by the user, and e the error term forced upon
the user by the use of binary floats. (If you're lucky, e=0.) Generally,
both a and e are unknown, but of course their sum is known -- it's just the
float x.
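A minimal sketch of that decomposition, using the fractions module for exact arithmetic (Python 3 syntax; `float_error` is a hypothetical helper name, not a stdlib function):

```python
from fractions import Fraction

def float_error(x, intended):
    """Split a float x into (a, e): a is the exact value the user
    wanted, e is the representation error x - a forced by binary floats."""
    a = Fraction(intended)       # exact intended value
    e = Fraction(x) - a          # exact error of the stored float
    return a, e

a, e = float_error(1.1, Fraction(11, 10))
print(e)   # -> 1/11258999068426240: 1.1 is not exactly representable
```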

So given a float x, when you square it you get this:

Exact values: a*a = a**2

Float values: x*x = (a+e)(a+e)
                  = a**2 + 2*a*e + e**2

So the error term has increased from e to (2*a*e+e**2). It is usual to
assume that e**2 is small enough that it underflows to zero, so we have the
error term e increasing to 2*a*e as a fairly simple estimate of the new
error.

> In an algorithm of the real implementation,
>   (a + e) * (a + e)
> would be computed as follows:
>   b := a + e
>   c := b * b
>     or
>   c := b + … + b
> [add the value of `b' to the value of `b' (b−1) times, since binary logic
> does not support multiplication]

What you probably mean to say is that binary hardware usually implements
multiplication via repeated addition.
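For what it's worth, real hardware does not literally add b to itself b-1 times either; a closer model is shift-and-add. Here is a toy sketch for non-negative integers (illustrative only; it is not how IEEE-754 significand multiplication is actually wired):

```python
def shift_and_add(x, y):
    """Multiply non-negative integers using only shifts and adds."""
    result = 0
    while y:
        if y & 1:           # low bit of y is set: add the shifted x
            result += x
        x <<= 1             # shift x left (doubles it)
        y >>= 1             # shift y right (move to the next bit)
    return result

assert shift_and_add(11, 5) == 55
```

This runs in a number of steps proportional to the bit length of y, not to y itself, which is why "binary logic does not support multiplication" is misleading.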


If you don't mean that, I can't imagine what you are trying to say.
Multiplication of numbers exists in any base, binary no less than decimal.

> IOW, the error does propagate into the result indeed, but not as you
> described.  Indeed, thanks to rounding on assignment and multiplication
> (i. e., setting register values or IEEE-754 floating-point mantissa and
> exponent), the error will be different, probably greater than you compute
> here.

There may be other sources of error, particularly when multiplying two
numbers of greatly different magnitudes, but that doesn't invalidate Thomas
Rachel's point (which is only an estimate, of course).

We can see how much error is actually there by using exact arithmetic:

Error in float 1.1:

>>> from fractions import Fraction as F
>>> a = F(11, 10)
>>> x = F.from_float(1.1)
>>> e = x - a
>>> print e
1/11258999068426240

Error in float 1.1*1.1:

>>> b = F(11, 10)**2
>>> y = F.from_float(1.1**2)
>>> f = y - b
>>> print f
21/112589990684262400

which is slightly more than double e above, and slightly less than our
estimate of 2*a*e = 11/56294995342131200.
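The whole comparison can be put into one runnable snippet, exact fractions throughout (the specific values assume IEEE-754 doubles):

```python
from fractions import Fraction as F

a = F(11, 10)            # exact intended value
x = F(1.1)               # exact value of the float 1.1
e = x - a                # error in the float itself

b = a**2                 # exact 121/100
y = F(1.1**2)            # exact value of the float product
f = y - b                # error after squaring

estimate = 2*a*e         # binomial-theorem estimate (e**2 dropped)
print(f > 2*e)           # True: slightly more than double e
print(f < estimate)      # True: slightly less than 2*a*e
```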

So we can conclude that, at least for 1.1**2, Python floats are more
accurate than we would expect from a simple application of the binomial
theorem. (For implementations using IEEE doubles.)
