[Numpy-discussion] assigning full precision values to longdouble scalars
sebastian at sipsolutions.net
Tue Feb 25 19:54:52 EST 2014
On Di, 2014-02-25 at 17:52 -0500, Scott Ransom wrote:
> Hi All,
> So I have a need to use longdouble numpy scalars in an application, and
> I need to be able to reliably set long-double precision values in them.
> Currently I don't see an easy way to do that. For example:
> In : numpy.longdouble("1.12345678901234567890")
> Out: 1.1234567890123456912
> Note the loss of those last couple digits.
> In : numpy.float("1.12345678901234567890")
> Out: 1.1234567890123457
> In : numpy.longdouble("1.12345678901234567890") - numpy.float("1.12345678901234567890")
> Out: 0.0
> And so internally they are identical.
> In this case, the string appears to be converted to a C double (i.e.
> numpy float) before being assigned to the numpy scalar. And therefore
> it loses precision.
> Is there a good way of setting longdouble values? Is this a numpy bug?
Yes, I think this is a bug (never checked); we use the Python parsing
functions where possible, but for longdouble a Python float (a C double) is
obviously not enough. A hack would be to split the value into two parts:
np.float128(1.1234567890) + np.float128(1234567890e-something)
Though it would be better for the numpy parser to parse the full
precision when it is given a string.
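The split hack can be made systematic with the standard library's exact
rational arithmetic: take the nearest double, compute the exact decimal
residual with `fractions.Fraction`, and add the two parts back together in
extended precision. This is a sketch, not a NumPy API; the helper name
`str_to_longdouble` is made up here, and it only buys anything on platforms
where `np.longdouble` is actually wider than a C double.

```python
from fractions import Fraction
import numpy as np

def str_to_longdouble(s):
    # Hypothetical helper: convert a decimal string to np.longdouble
    # without funneling the whole value through a single C double.
    hi = float(s)                       # nearest double to the value
    rest = Fraction(s) - Fraction(hi)   # exact residual as a rational
    lo = float(rest)                    # residual is tiny, a double holds it
    # Sum the two exactly-representable parts in extended precision.
    return np.longdouble(hi) + np.longdouble(lo)

print(repr(str_to_longdouble("1.12345678901234567890")))
```

Both `hi` and `lo` are exact binary doubles, so the only rounding left is
the final longdouble addition, which keeps the result accurate to roughly
longdouble epsilon rather than double epsilon.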
> I was considering using a tiny cython wrapper of strtold() to do a
> conversion from a string to a long double, but it seems like this is
> basically what should be happening internally in numpy in the above example!