I am completely new to Numpy and I know only the basics of Python; up to this point I was using Fortran 03/08 to write numerical code. However, I am starting a large new project of mine and I am looking forward to using Python to call some low-level Fortran code responsible for most of the intensive number crunching. In this context I stumbled upon f2py and it looks just like what I need, but before I start writing an app in a mixture of Python and Fortran I have a question about the numerical precision of variables used in numpy and f2py.

Is there any way to interact with Fortran's real(16) (supported by gcc and Intel's ifort) data type from numpy? By real(16) I mean the binary128 type as in IEEE 754. (In C this data type is experimentally supported as __float128 (gcc) and _Quad (Intel's icc).) I have investigated the float128 data type, but it seems to work as binary64 or binary80 depending on the architecture. If there is currently no way to interact with binary128, how hard would it be to patch the sources of numpy to add such a data type? I am interested only in basic stuff, comparable in functionality to libmath.

As said before, I have little knowledge of Python, Numpy and f2py; I am, however, interested in investing some time in learning them and implementing the mentioned features, but only if there is any hope of succeeding.
Thanks to your question, I discovered that there is a float128 dtype in numpy:

In [5]: np.__version__
Out[5]: '1.6.1'

In [6]: np.float128?
Type:        type
Base Class:  <type 'type'>
String Form: <type 'numpy.float128'>
Namespace:   Interactive
File:        /Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/site-packages/numpy/__init__.py
Docstring:   128-bit floating-point number. Character code: 'g'. C long float compatible.

There seem to be some reported issues, though, with its mapping to the Python long integer type: http://mail.scipy.org/pipermail/numpy-discussion/2011-October/058784.html

HTH,
Jonathan
--
Jonathan Rocher, PhD
Scientific software developer
Enthought, Inc.
jrocher@enthought.com
1-512-536-1057
http://www.enthought.com
Hi,

On Wed, Feb 29, 2012 at 12:13 PM, Jonathan Rocher <jrocher@enthought.com> wrote:
> Thanks to your question, I discovered that there is a float128 dtype in numpy [...]
Right - but remember that numpy float128 is different on different platforms. In particular, float128 is any C long double type that needs 128 bits of memory, regardless of precision or implementation. See [1] for background on the C long double type. The numpy platforms I know about are:

- Intel: 80-bit float padded to 128 bits [2]
- PPC: pair of float64 values [3]
- Debian IBM s390: real quadruple precision [4] [5]

I see that some Sun machines implement real quadruple precision in software, but I haven't run numpy on a Sun machine [6].

[1] http://en.wikipedia.org/wiki/Long_double
[2] http://en.wikipedia.org/wiki/Extended_precision#x86_Architecture_Extended_Pr...
[3] http://en.wikipedia.org/wiki/Double-double_%28arithmetic%29#Double-double_ar...
[4] http://en.wikipedia.org/wiki/Double-double_%28arithmetic%29#IEEE_754_quadrup...
[5] https://github.com/nipy/nibabel/issues/76
[6] http://en.wikipedia.org/wiki/Double-double_%28arithmetic%29#Implementations
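A quick way to see which flavour you actually got on a given box - a minimal sketch; finfo reports the precision really stored, padding aside:

import numpy as np

# Storage size, padding included: 16 bytes on x86-64 (12 on 32-bit Linux).
print(np.dtype(np.longdouble).itemsize)
# The stored mantissa bits reveal the actual precision:
# 52 -> plain binary64, 63 -> 80-bit x87 extended, 112 -> true binary128.
print(np.finfo(np.longdouble).nmant)
print(np.finfo(np.longdouble).eps)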
> There seem to be some reported issues, though, with its mapping to the Python long integer type: http://mail.scipy.org/pipermail/numpy-discussion/2011-October/058784.html
I tried to summarize the problems I knew about here: http://mail.scipy.org/pipermail/numpy-discussion/2011-November/059087.html

There are some routines to deal with some of the problems here: https://github.com/nipy/nibabel/blob/master/nibabel/casting.py

After spending some time with the various long doubles in numpy, I have learned to stare at my code for a long time, considering how it might run into the various problems above.
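For instance, one gotcha that keeps coming up - a minimal sketch, assuming an x86 longdouble with more precision than a Python float:

import numpy as np

# The extra precision is real, but easy to destroy by accident: any trip
# through a 64-bit Python float silently truncates it.
x = np.longdouble(1) + np.finfo(np.longdouble).eps  # smallest longdouble > 1
print(x == 1)           # False: longdouble really holds the extra bits
print(float(x) == 1.0)  # True: the cast to Python float threw them away

Best,

Matthew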
On Wed, Feb 29, 2012 at 10:22 AM, Paweł Biernat <pwl_b@wp.pl> wrote:
> Is there any way to interact with Fortran's real(16) (supported by gcc and Intel's ifort) data type from numpy? By real(16) I mean the binary128 type as in IEEE 754. [...]
Numpy does not have proper support for quadruple-precision floating-point numbers, because very few implementations do (no common CPU handles it in hardware, for example). The float128 dtype is a bit confusingly named: the 128 refers to its size in memory (padding included), not to its "real" precision. It usually (but not always) refers to the long double of the underlying C implementation, which in turn depends on the OS, CPU and compiler.

cheers,

David
Hi,

On 29/02/2012 16:22, Paweł Biernat wrote:
> Is there any way to interact with Fortran's real(16) (supported by gcc and Intel's ifort) data type from numpy? By real(16) I mean the binary128 type as in IEEE 754. (In C this data type is experimentally supported as __float128 (gcc) and _Quad (Intel's icc).)

I googled this "__float128" a bit. It seems to be a fairly new addition (GCC 4.6, released March 2011). The relevant point in the changelog [1] is:
"GCC now ships with the LGPL-licensed libquadmath library, which provides quad-precision mathematical functions for targets with a __float128 datatype. __float128 is available for targets on 32-bit x86, x86-64 and Itanium architectures. The libquadmath library is automatically built on such targets when building the Fortran compiler." It seems this __float128 is newcomer in the "picture of data types" that Matthew just mentioned. As David says, arithmetic with such a 128 bits data type is probably not "hardwired" in most processors (I mean Intel & friends) which are limited to 80 bits ("long doubles") so it may be a bit slow. However, this GCC implementation with libquadmath seems to create some level of abstraction. Maybe this is one acceptably good way for a real "IEEE float 128" dtype in numpy ? Best, Pierre [1] http://gcc.gnu.org/gcc-4.6/changes.html
On Feb 29, 2012, at 11:52 AM, Pierre Haessig wrote:
> "GCC now ships with the LGPL-licensed libquadmath library, which provides quad-precision mathematical functions for targets with a __float128 datatype. [...]"
Great find!
> However, this GCC implementation with libquadmath seems to provide some level of abstraction. Maybe this is one acceptably good way to get a real "IEEE float 128" dtype in numpy?
That would be really nice. The problem here is twofold:

* Backwards compatibility. float128 would then represent a different data type than before, so we should probably find a new name (and charcode!) for quad precision. Maybe quad128?

* Compiler dependency. The new type would only be available on platforms that have GCC 4.6 or above. Again, using the new name only for the real thing should be fine. On platforms/compilers that do not support the quad128 thing, it simply would not be defined.

Uh, I foresee many portability problems for people using this, but perhaps it is worth the mess.
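In the meantime, code that wants the most precise type available has to feature-detect rather than assume a name. A minimal sketch (quad128 is hypothetical; the rest is today's numpy):

import numpy as np

# Prefer a true quad type if one ever appears, otherwise fall back to
# whatever long double the platform provides.
quad = getattr(np, 'quad128', None)       # hypothetical future name
if quad is None:
    quad = getattr(np, 'float128', None)  # 128 bits of storage, not necessarily of precision
if quad is None:
    quad = np.longdouble                  # always defined, may be plain double
x = np.zeros(10, dtype=quad)

-- Francesc Alted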
On Wed, Feb 29, 2012 at 1:09 PM, Francesc Alted <francesc@continuum.io> wrote:
> Uh, I foresee many portability problems for people using this, but perhaps it is worth the mess.
The quad-precision library has been there for a while, and quad precision is also supported by the Intel compiler. I don't know about MSVC. Intel has been working on adding quad precision to their hardware for several years, and there is an IEEE spec for it, so some day it will be here, but it isn't here yet.

It's a bit sad; I could use quad precision in FORTRAN on a VAX 25 years ago. Mind, I only needed it once ;) I suppose lack of pressing need accounts for the delay.

Chuck
Charles R Harris <charlesr.harris <at> gmail.com> writes:
> Intel has been working on adding quad precision to their hardware for several years, and there is an IEEE spec for it, so some day it will be here, but it isn't here yet.
Waiting for hardware support can last forever, and __float128 is already here. Despite being implemented in software, it is still reasonably fast for people who need it. The slowdown depends on the case and on optimization, roughly from x2 (using SSE) to x10 (without optimization), but you gain twice as many significant digits compared to double; see for example http://locklessinc.com/articles/classifying_floats/. This is still faster than mpfr, for example. And gcc-4.6 already supports __float128 on a number of machines: i386, x86_64, ia64 and HP-UX.

Also, fftw now supports binary128: http://www.fftw.org/release-notes.html (although this might not be the most representative numerical software, it suggests that __float128 is unlikely to be ignored by others, even without hardware support).

The portability of numpy.float128 is broken anyway (as I understand it, it behaves in different ways on different architectures), so adding a new type (call it, say, quad128) that properly supports binary128 shouldn't be a drawback. Later on, when hardware support for binary128 shows up, quad128 will already be there.
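For the digit claim, a quick back-of-the-envelope check: binary64 carries a 53-bit significand and binary128 a 113-bit one, so:

import math

print(int(53 * math.log10(2)))   # ~15 significant decimal digits in binary64
print(int(113 * math.log10(2)))  # ~34 significant decimal digits in binary128

Paweł.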
On Mar 2, 2012 10:48 AM, "Paweł Biernat" <pwl_b@wp.pl> wrote:
> The portability of numpy.float128 is broken anyway (as I understand it, it behaves in different ways on different architectures), so adding a new type (call it, say, quad128) that properly supports binary128 shouldn't be a drawback. Later on, when hardware support for binary128 shows up, quad128 will already be there.
There's already been movement to deprecate using float128 as the name for machine-specific long doubles. This just gives even more reason. If/when someone adds __float128 support to numpy we should really just call it float128, not quad128. (This would even be backwards compatible, since float128 currently gives no guarantees on precision or representation.) - n
Pierre Haessig <pierre.haessig <at> crans.org> writes:
> However, this GCC implementation with libquadmath seems to provide some level of abstraction. Maybe this is one acceptably good way to get a real "IEEE float 128" dtype in numpy?
Intel also has its own implementation of binary128, although it is not well documented (as you said, it is software-emulated, but still quite fast): http://software.intel.com/sites/products/documentation/hpc/compilerpro/en-us...

The documentation is for Fortran's real(16), but I believe the same holds for the _Quad type in C. My naive question is: is there a way to recompile numpy with "long double" (or just "float128") replaced by "_Quad" or "__float128"? There are at least two compilers that support the respective data types, so this should be doable. I tested interoperability of binary128 between Fortran and C (using gcc and Intel's compilers) and it works like a charm. The only problem that comes to my mind is I/O, because there is no printf format for _Quad or __float128, and Fortran routines have to be used to do all the I/O.
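As a stopgap on the I/O side, one can at least read the raw bytes of real(16) values written by Fortran and decode them in pure Python. The helper below is only an illustrative sketch (numpy itself still cannot compute with such values):

import struct
from fractions import Fraction

def decode_binary128(raw):
    # raw: 16 little-endian bytes holding an IEEE 754 binary128 value.
    # Returns the exact value as a Fraction (for inspection only).
    lo, hi = struct.unpack('<QQ', raw)
    sign = -1 if hi >> 63 else 1
    exponent = (hi >> 48) & 0x7FFF
    mantissa = ((hi & 0xFFFFFFFFFFFF) << 64) | lo
    if exponent == 0x7FFF:
        raise ValueError('inf or nan')
    if exponent == 0:  # subnormal
        return sign * Fraction(mantissa, 2 ** 112) * Fraction(2) ** (-16382)
    return sign * (1 + Fraction(mantissa, 2 ** 112)) * Fraction(2) ** (exponent - 16383)

# 1.0 in binary128: sign 0, biased exponent 16383, mantissa 0.
print(decode_binary128(struct.pack('<QQ', 0, 16383 << 48)))  # prints 1

Paweł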
participants (8)

- Charles R Harris
- David Cournapeau
- Francesc Alted
- Jonathan Rocher
- Matthew Brett
- Nathaniel Smith
- Paweł Biernat
- Pierre Haessig