[Numpy-discussion] Precision difference between dot and sum

Charles R Harris charlesr.harris at gmail.com
Mon Nov 1 21:21:05 EDT 2010


On Mon, Nov 1, 2010 at 5:30 PM, Joon <groups.and.lists at gmail.com> wrote:

> Hi,
>
> I just found that using dot instead of sum in numpy gives me results with
> less precision loss. For example, I optimized a function with
> scipy.optimize.fmin_bfgs. For the return value of the function, I tried the
> following two things:
>
> sum(Xb) - sum(denominator)
>
> and
>
> dot(ones(Xb.shape), Xb) - dot(ones(denominator.shape), denominator)
>
> Both of them are supposed to yield the same thing. But the first one gave
> me -589112.30492110562 and the second one gave me -589112.30492110678.
>
> In addition, with the routine using sum, the optimizer gave me "Warning:
> Desired error not necessarily achieved due to precision loss." With the
> routine using dot, the optimizer gave me "Optimization terminated
> successfully."
> I checked the gradient value as well (I provided an analytical gradient),
> and the gradient was smaller in the dot case too. (Of course, the magnitude
> was on the order of 1e-5 to 1e-6, but still.)
>
> I was wondering if this is a well-known fact and whether I'm supposed to use
> dot instead of sum whenever possible.
>
> It would be great if someone could let me know why this happens.
>
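
A minimal sketch of the effect being described, using stand-in data (the
original Xb and denominator arrays aren't shown, so a large random float64
vector is used instead; math.fsum provides an exactly rounded reference sum).
Note that recent numpy releases use pairwise summation inside sum, so the gap
may be smaller today than it was in 2010:

    import math
    import numpy as np

    # Stand-in data: the original Xb/denominator arrays are not shown,
    # so a large random float64 vector illustrates the effect instead.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(10**6)

    s_sum = np.sum(x)                    # numpy's reduction
    s_dot = np.dot(np.ones(x.shape), x)  # BLAS dot against a vector of ones

    ref = math.fsum(x)                   # exactly rounded reference sum
    print(s_sum - s_dot)                 # often nonzero in the last few bits
    print(abs(s_sum - ref), abs(s_dot - ref))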

Are you running on 32 bits or 64 bits? I ask because there are different
floating-point precisions on the 32-bit platform (the x87 FPU accumulates in
80-bit extended precision internally, while SSE2 works in 64 bits) and the
results can depend on how the compiler does things. The relative difference
between your results is ~2e-15, which isn't that far from the float64
precision of ~2e-16, so little things can make a difference.

Chuck
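
For reference, a quick check of the relative difference mentioned above,
using the two values from the original post:

    import numpy as np

    a = -589112.30492110562          # result from the sum-based routine
    b = -589112.30492110678          # result from the dot-based routine

    rel = abs(a - b) / abs(a)
    print(rel)                       # ~2e-15
    print(np.finfo(np.float64).eps)  # ~2.22e-16, float64 machine epsilon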