[Numpy-discussion] Fast vectorized arithmetic with ~32 significant digits under Numpy

Anne Archibald archibald at astron.nl
Sat Dec 12 14:15:43 EST 2015


On Fri, Dec 11, 2015, 18:04 David Cournapeau <cournape at gmail.com> wrote:

On Fri, Dec 11, 2015 at 4:22 PM, Anne Archibald <archibald at astron.nl> wrote:

Actually, GCC implements 128-bit floats in software and provides them as
__float128; there are also quad-precision versions of the usual functions.
The Intel compiler provides this as well, I think, but I don't think
Microsoft compilers do. A portable quad-precision library might be less
painful.
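
A minimal, GCC-only sketch of what the software quads look like in C
(this assumes libquadmath, which ships with GCC; link with -lquadmath):

    #include <stdio.h>
    #include <quadmath.h>

    int main(void)
    {
        __float128 x = 2.0Q;        /* Q suffix marks a quad literal */
        __float128 r = sqrtq(x);    /* q-suffixed libm analogue from libquadmath */
        char buf[128];

        /* printf() cannot format __float128; libquadmath provides
         * quadmath_snprintf() with a Q length modifier instead. */
        quadmath_snprintf(buf, sizeof buf, "%.33Qg", r);
        printf("sqrt(2) = %s\n", buf);
        printf("FLT128_DIG = %d decimal digits\n", FLT128_DIG);
        return 0;
    }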

The cleanest way to add extended precision to numpy is by adding a
C-implemented dtype. This can be done in an extension module; see the
quaternion and half-precision modules online.
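
For reference, a skeleton of that approach, following the registration
pattern those modules use. All names here (quad_dtype, quad_getitem, ...)
are hypothetical, most required slots (copyswap, compare, casts, ufunc
loops) are omitted, and the remaining PyArray_Descr fields would need to
be filled in too, so treat this as a sketch rather than a usable dtype:

    /* quad_dtype.c -- build as a normal C extension against numpy's
     * headers, with GCC, linking -lquadmath. */
    #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
    #include <Python.h>
    #include <string.h>
    #include <numpy/arrayobject.h>

    typedef struct { __float128 value; } quad;

    /* Minimal scalar type so the dtype has a typeobj to point at. */
    typedef struct { PyObject_HEAD quad obval; } PyQuadScalarObject;

    static PyTypeObject PyQuadScalar_Type = {
        PyVarObject_HEAD_INIT(NULL, 0)
        .tp_name = "quad_dtype.quad",
        .tp_basicsize = sizeof(PyQuadScalarObject),
    };

    static PyArray_ArrFuncs quad_arrfuncs;

    /* Element -> Python object.  Returning a Python float here already
     * drops precision -- exactly the 64-bit bottleneck discussed below. */
    static PyObject *quad_getitem(void *data, void *arr)
    {
        quad q;
        memcpy(&q, data, sizeof(q));
        return PyFloat_FromDouble((double)q.value);
    }

    static int quad_setitem(PyObject *item, void *data, void *arr)
    {
        quad q = { (__float128)PyFloat_AsDouble(item) };
        if (PyErr_Occurred()) return -1;
        memcpy(data, &q, sizeof(q));
        return 0;
    }

    static struct PyModuleDef moduledef = {
        PyModuleDef_HEAD_INIT, "quad_dtype", NULL, -1, NULL
    };

    PyMODINIT_FUNC PyInit_quad_dtype(void)
    {
        PyObject *m = PyModule_Create(&moduledef);
        if (m == NULL) return NULL;
        import_array();

        /* numpy scalars should inherit from the generic scalar base. */
        PyQuadScalar_Type.tp_base = &PyGenericArrType_Type;
        if (PyType_Ready(&PyQuadScalar_Type) < 0) return NULL;

        PyArray_InitArrFuncs(&quad_arrfuncs);
        quad_arrfuncs.getitem = quad_getitem;
        quad_arrfuncs.setitem = quad_setitem;

        PyArray_Descr *descr = PyObject_New(PyArray_Descr, &PyArrayDescr_Type);
        descr->typeobj = &PyQuadScalar_Type;
        descr->kind = 'f';
        descr->type = 'q';            /* arbitrary one-character type code */
        descr->byteorder = '=';
        descr->flags = 0;
        descr->elsize = sizeof(quad);
        descr->alignment = __alignof__(quad);
        descr->subarray = NULL;
        descr->fields = NULL;
        descr->names = NULL;
        descr->f = &quad_arrfuncs;

        /* Registration hands back a new type number for the dtype. */
        if (PyArray_RegisterDataType(descr) < 0) return NULL;
        PyModule_AddObject(m, "quad_descr", (PyObject *)descr);
        return m;
    }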

We actually used a __float128 dtype as an example of how to create a custom
dtype in a numpy C tutorial we did with Stéfan van der Walt a few years ago
at SciPy.

IIRC, one of the issues with making it more than a PoC was that numpy
hardcoded things like long double being the highest precision, etc. But that
may have been fixed since then.

I did some work on numpy's long-double support, partly to better understand
what would be needed to make quads work. The main obstacle is, I think, the
same: python floats are only 64-bit, and many functions are stuck passing
through them. It takes a lot of fiddling to make string conversions work
without passing through python floats, for example, and it takes some care
to produce scalars of the appropriate type. There are a few places where
you'd want to modify the guts of numpy if you had a higher precision
available than long doubles.
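
A small C illustration of that bottleneck, using libquadmath's
strtoflt128() and quadmath_snprintf() (file name hypothetical; link with
-lquadmath). Parsing through a 64-bit double first, which is effectively
what routing a value through a Python float does, silently truncates to
about 17 digits:

    /* roundtrip.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <quadmath.h>

    int main(void)
    {
        const char *s = "3.141592653589793238462643383279503"; /* pi, 34 digits */
        char buf[64];

        /* Direct quad parse: keeps all ~33 significant digits. */
        __float128 q = strtoflt128(s, NULL);
        quadmath_snprintf(buf, sizeof buf, "%.33Qg", q);
        printf("via __float128: %s\n", buf);

        /* Detour through a C double: everything past ~17 digits is
         * already gone by the time the value becomes a quad. */
        __float128 d = (__float128)strtod(s, NULL);
        quadmath_snprintf(buf, sizeof buf, "%.33Qg", d);
        printf("via double:     %s\n", buf);
        return 0;
    }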

Anne

