[Numpy-discussion] Precision/value change moving from C to Python

Daπid davidmenhur at gmail.com
Wed Nov 13 04:25:17 EST 2013


On 13 November 2013 02:40, Bart Baker <bartbkr at gmail.com> wrote:

> > That is the order of the machine epsilon for double, that looks like
> > roundoff errors to me.
>
> I'm trying to wrap my head around this. So does that mean that neither of
> them is "right", that it is just the result of doing the same
> calculation two different ways using different computational libraries?


Essentially, yes.

I am tempted to say that, depending on the compiler flags, the C version
*could* be more accurate, as the compiler can reorganise the operations and
reduce the number of steps. But also, if it is optimised for speed, it
could be using faster and less accurate functions and techniques.
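
As a quick illustration (plain Python, nothing numpy-specific): just
regrouping three additions already moves the last bit, and a compiler is
free to do exactly this kind of reassociation under flags like -ffast-math:

>>> (0.1 + 0.2) + 0.3
0.6000000000000001
>>> 0.1 + (0.2 + 0.3)
0.6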

In any case, if that 10^-16 matters to you, I'd say you are either doing
something wrong or using the wrong dtype; and without knowing the
specifics, I would bet on the first one. If you really need that precision,
you would have to use more bits, and make sure your library supports that
dtype. I believe the following shows that (my) numpy.cos can deal with
128-bit floats without first casting them down to float64:

>>> import numpy as np
>>> a = np.array([1.2584568431575694895413875135786543], dtype=np.float128)
>>> np.cos(a)-np.cos(a.astype(np.float64))
array([ 7.2099444e-18], dtype=float128)
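
For reference, you can query the machine epsilon of each dtype with
np.finfo; the values below are from an x86 machine, where float128 is
really 80-bit extended precision, so the exact figures may differ on your
platform:

>>> np.finfo(np.float64).eps
2.2204460492503131e-16
>>> np.finfo(np.float128).eps
1.084202172485504434e-19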


The bottom line is: don't trust the least significant bits of your
floating point numbers.
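
In practice that means comparing results with a tolerance rather than with
exact equality, for instance with np.allclose (default rtol=1e-05,
atol=1e-08; tighten them if your application needs it):

>>> a = (0.1 + 0.2) + 0.3
>>> b = 0.1 + (0.2 + 0.3)
>>> a == b
False
>>> np.allclose(a, b)
True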


/David.

