Hi folks,

I recently came across an application I needed quad precision for (high-accuracy solution of a differential equation). I found a C++ library (odeint) that worked for the integration itself, but unfortunately it appears numpy is not able to work with quad precision arrays. For my application the quad precision is only really needed for integrator state, so I can manually convert my data to long doubles as I go to and from numpy, but it's a pain. So quad precision support would be nice.

There's a thread discussing quad support: http://mail.scipy.org/pipermail/numpy-discussion/2012-February/061080.html Essentially, there isn't any, but since gcc >= 4.6 supports them on Intel hardware (in software), it should be possible. (Then the thread got bogged down in bike-shedding about what to call them.)

What would it take to support quads in numpy? I looked into the numpy base dtype definitions, and it's a complex arrangement involving detection of platform support and templatized C code; in the end I couldn't really see where to start. But it seems to me all the basics are available: native C syntax for basic arithmetic, "qabs"-style versions of all the basic functions, NaNs and Infs. So how would one go about adding quad support?

Thanks,
Anne
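For concreteness, a minimal sketch of the raw C pieces mentioned above (assuming gcc >= 4.6 and linking with -lquadmath; nothing numpy-specific):

#include <stdio.h>
#include <quadmath.h>

int main(void)
{
    __float128 x = 1.0Q / 3.0Q;            /* native syntax for arithmetic */
    __float128 y = sqrtq(x) + fabsq(-x);   /* "qabs"-style libquadmath functions */
    char buf[128];

    /* __float128 carries ~33 significant decimal digits */
    quadmath_snprintf(buf, sizeof buf, "%.36Qg", y);
    printf("y = %s\n", buf);
    printf("isnan(y) = %d\n", isnanq(y));  /* NaN and Inf handling exist as well */
    return 0;
}

Compile with something like: gcc quadtest.c -lquadmath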
Hi Anne,

Long time no see ;)

On Wed, Jun 5, 2013 at 10:07 AM, Anne Archibald <archibald@astron.nl> wrote:
What would it take to support quads in numpy? I looked into the numpy base dtype definitions, and it's a complex arrangement involving detection of platform support and templatized C code; in the end I couldn't really see where to start. But it seems to me all the basics are available: native C syntax for basic arithmetic, "qabs"-style versions of all the basic functions, NaNs and Infs. So how would one go about adding quad support?
There are some improvements for user types committed in 1.8-dev. Perhaps quad support could be added as a user type, as it is still platform/compiler dependent. The rational type added to numpy could supply a template for adding the new type.

Long term, we need to have quad support. If you run on Sun hardware I think it is already available as the extended precision type, but that doesn't help the majority of users at this point, and I don't think LAPACK/BLAS supports it.

Chuck
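To make that suggestion a bit more concrete, here is a rough, untested sketch of one of the pieces such a user type would need: a strided inner loop for a quad ufunc, in the style of the rational test module and written against the numpy 1.7/1.8 C API. The descriptor/arrfuncs registration boilerplate is elided, and npy_quad stands for the type number PyArray_RegisterDataType would hand back.

#include <Python.h>
#include <string.h>
#include <numpy/arrayobject.h>
#include <numpy/ufuncobject.h>

/* Type number that PyArray_RegisterDataType() would assign to the quad dtype. */
static int npy_quad;

/* Strided inner loop computing out = in1 + in2 on __float128 elements. */
static void quad_add_loop(char **args, npy_intp *dimensions,
                          npy_intp *steps, void *data)
{
    npy_intp n = dimensions[0];
    char *in1 = args[0], *in2 = args[1], *out = args[2];
    npy_intp i;
    for (i = 0; i < n; i++) {
        __float128 a, b, c;
        memcpy(&a, in1, sizeof a);   /* strided elements may be unaligned */
        memcpy(&b, in2, sizeof b);
        c = a + b;                   /* gcc lowers this to a soft-float call on x86 */
        memcpy(out, &c, sizeof c);
        in1 += steps[0];
        in2 += steps[1];
        out += steps[2];
    }
}

/* Registration, roughly, once the quad PyArray_Descr has been registered:
 *
 *     int types[3] = {npy_quad, npy_quad, npy_quad};
 *     PyObject *numpy = PyImport_ImportModule("numpy");
 *     PyObject *add = PyObject_GetAttrString(numpy, "add");
 *     PyUFunc_RegisterLoopForType((PyUFuncObject *)add, npy_quad,
 *                                 quad_add_loop, types, NULL);
 */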
On Wed, Jun 5, 2013 at 5:21 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
There are some improvements for user types committed in 1.8-dev. Perhaps quad support could be added as a user type as it is still platform/compiler dependent. The rational type added to numpy could supply a template for adding the new type.
I would be in support of that direction as well: let it live separately until CPU/compiler support catches up.

Anne, will you be at the SciPy conference? Improving the internal API for user data types is something I'd like to work on as well.

David
Looking at the rational module, I think you're right: it really shouldn't be too hard to get quads working as a user type using gcc's __float128 type, which will provide hardware arithmetic in the unlikely case that the user has hardware quads. Alternatively, probably more work, one could use a package like qd to provide portable quad precision (and quad-double). I'll take a look.

Anne
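For anyone unfamiliar with the qd approach: double-double arithmetic builds a roughly 32-digit type out of pairs of ordinary doubles using only standard floating-point operations, so it is portable wherever IEEE doubles are. A minimal sketch of just the addition step (the real qd library does much more, with careful renormalization):

#include <stdio.h>

/* Error-free transformation (Knuth): s + err is exactly a + b. */
static void two_sum(double a, double b, double *s, double *err)
{
    double v;
    *s = a + b;
    v = *s - a;
    *err = (a - (*s - v)) + (b - v);
}

/* Add two double-double numbers, each held as a (hi, lo) pair. */
static void dd_add(double ahi, double alo, double bhi, double blo,
                   double *chi, double *clo)
{
    double s, e;
    two_sum(ahi, bhi, &s, &e);
    e += alo + blo;
    two_sum(s, e, chi, clo);    /* renormalize the result */
}

int main(void)
{
    double hi, lo;
    /* (1 + 2^-60) + (1 + 2^-60): the tiny parts survive in the low word */
    dd_add(1.0, 0x1p-60, 1.0, 0x1p-60, &hi, &lo);
    printf("hi = %.17g  lo = %.17g\n", hi, lo);   /* hi = 2, lo = 2^-59 */
    return 0;
}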
On Sat, 2013-06-08 at 14:35 +0200, Anne Archibald wrote:
Looking at the rational module, I think you're right: it really shouldn't be too hard to get quads working as a user type using gcc's __float128 type, which will provide hardware arithmetic in the unlikely case that the user has hardware quads. Alternatively, probably more work, one could use a package like qd to provide portable quad precision (and quad-double).
In this vague area, and further to a question I asked a while ago on StackOverflow (http://stackoverflow.com/q/9062562/709852), is there some deep reason why on some platforms longdouble is float128 and on others it's float96?

Cheers,
Henry
On Sun, Jun 9, 2013 at 8:35 AM, Henry Gomersall <heng@cantab.net> wrote:
In this vague area, and further to a question I asked a while ago on StackOverflow (http://stackoverflow.com/q/9062562/709852), is there some deep reason why on some platforms longdouble is float128 and on others it's float96?
Long double is not standardized (the way single/double are), so it is CPU dependent. On Intel CPUs, long double is generally the 80-bit extended-precision format. On 32-bit systems it is padded to 12 bytes (the next multiple of 4 bytes); on 64-bit systems, to 16 bytes (the next multiple of 8 bytes). MS compilers are a notable exception, where long double == double. So it depends on the CPU, the OS and the compiler.

Using long double for anything other than compatibility (e.g. binary files) is often a mistake IMO, and highly unportable.

David
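A quick way to check what a particular platform/compiler actually gives you (plain C, nothing numpy-specific; the numbers in the comments are what gcc on x86 typically reports):

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* 12 on 32-bit gcc/x86, 16 on 64-bit gcc/x86, 8 with MSVC */
    printf("sizeof(long double) = %zu\n", sizeof(long double));
    /* 64 for the x87 80-bit format, 53 when long double == double */
    printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);
    /* ~1.08e-19 for 80-bit extended, ~2.22e-16 for plain double */
    printf("LDBL_EPSILON = %Lg\n", LDBL_EPSILON);
    return 0;
}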
On Sun, 2013-06-09 at 12:23 +0100, David Cournapeau wrote:
Long double is not standardized (the way single/double are), so it is CPU dependent. On Intel CPUs, long double is generally the 80-bit extended-precision format. On 32-bit systems it is padded to 12 bytes (the next multiple of 4 bytes); on 64-bit systems, to 16 bytes (the next multiple of 8 bytes). MS compilers are a notable exception, where long double == double.

So it depends on the CPU, the OS and the compiler. Using long double for anything other than compatibility (e.g. binary files) is often a mistake IMO, and highly unportable.
Interesting. So long double consistently maps to the platform specific long double?

With my work on https://github.com/hgomersall/pyFFTW, which supports long double as one of the data types, numpy's long double is absolutely the right way to do this. Certainly I've managed reasonable success across the three main OSs with this approach.

Cheers,
Henry
On Mon, Jun 10, 2013 at 7:49 AM, Henry Gomersall <heng@cantab.net> wrote:
Interesting. So long double consistently maps to the platform specific long double?
numpy's longdouble dtype should consistently map to the C "long double" type used by the compiler that was used to build numpy (not necessarily the same as the one used to build any other extension!).
With my work on https://github.com/hgomersall/pyFFTW, which supports long double as one of the data types, numpy's long double is absolutely the right way to do this. Certainly I've managed reasonable success across the three main OSs with this approach.
It's certainly the best option in that situation, but you may run into problems with people using different compilers to build pyFFTW than what was used to build the numpy binary they have installed.

--
Robert Kern
On Mon, 2013-06-10 at 13:21 +0100, Robert Kern wrote:
It's certainly the best option in that situation, but you may run into problems with people using different compilers to build pyFFTW than what was used to build the numpy binary they have installed.
No doubt, though that problem goes deeper than my wrappers. It's something I'm aware of at least!

Cheers,
Henry
On 9 June 2013 13:23, David Cournapeau <cournape@gmail.com> wrote:
So it depends on the CPU, the OS and the compiler. Using long double for anything other than compatibility (e.g. binary files) is often a mistake IMO, and highly unportable.
Now this I have to comment on. Long double is highly questionable for compatibility, especially with binary files, for the reasons you describe. But if what you want is something like double but with a few extra decimal places and hardware support, long double is exactly what you want. Portably! Some platforms will fail to give you any extra decimal places, and in-memory size may differ, but you can be pretty confident that it'll give you approximately what you're looking for anywhere.

I am using it right now (and in fact one of the standard tools, tempo2, does this also) for handling pulsar pulse arrival times. These are expressed in Modified Julian Days, so typically are of order 50000, but we care about nanosecond-level differences in these numbers (and yes, there are pulsars where you can track every rotation for ten thousand days to less than a microsecond). Doubles give you quarter-microsecond roundoff error on every operation. Long doubles are better, though not enough for me to be really comfortable. Hence pining after quads.

Anne
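A back-of-the-envelope check of those figures, looking at the spacing of representable values near MJD 50000 when the day count is held in a double versus a long double (the long double figure assumes the x87 80-bit format):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double mjd = 50000.0;
    long double mjdl = 50000.0L;

    /* gap to the next representable value, converted from days to seconds */
    printf("double:      %g s\n", (nextafter(mjd, INFINITY) - mjd) * 86400.0);
    /* about 6.3e-07 s, so roughly a quarter to half a microsecond of
       rounding error per operation */

    printf("long double: %Lg s\n", (nextafterl(mjdl, INFINITY) - mjdl) * 86400.0L);
    /* about 3.1e-10 s with the 80-bit format: better, but not comfortably
       below a nanosecond once errors accumulate */
    return 0;
}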
On Wed, Jun 12, 2013 at 7:55 PM, Anne Archibald <archibald@astron.nl> wrote:
Now this I have to comment on. Long double is highly questionable for compatibility, especially with binary files, for the reasons you describe.
By compatibility, I meant binary compatibility: if you have some binary files or some API/ABI requiring long double, then using long double is a requirement. A typical example is using structured dtypes to read binary headers with a long double in them.
But if what you want is something like double but with a few extra decimal places and hardware support, long double is exactly what you want. Portably! Some platforms will fail to give you any extra decimal places, and in-memory size may differ, but you can be pretty confident that it'll give you approximately what you're looking for anywhere.
Sure, if you want "at least double precision" behavior, it gives you that. I fail to see how it could be useful in a portable context if you actually require that extra precision: if it is OK to fall back on double, why not use double in the first place?

David
participants (5)
- Anne Archibald
- Charles R Harris
- David Cournapeau
- Henry Gomersall
- Robert Kern