Fast vectorized arithmetic with ~32 significant digits under Numpy
From time to time it is asked on forums how to extend the precision of computations on Numpy arrays. The most common answer given to this question is: use dtype=object with some arbitrary-precision module like mpmath or gmpy. See http://stackoverflow.com/questions/6876377/numpy-arbitrary-precision-linear-... or http://stackoverflow.com/questions/21165745/precision-loss-numpy-mpmath or http://stackoverflow.com/questions/15307589/numpy-array-with-mpz-mpfr-values

While this is obviously the most relevant answer for many users, because it lets them use Numpy arrays exactly as they would with native types, the drawback is that, from some point of view, "true" vectorization is lost.

Over the years I have become very familiar with the extended double-double type, which has (on usual architectures) about 32 accurate digits and much faster arithmetic than arbitrary-precision types. I have even used it for research purposes in number theory, and I am convinced it is a wonderful type whenever that level of precision is enough.

I have often implemented it partially under Numpy, most of the time by trying to vectorize the libqd library at a low level. But I recently realized that a very nice and portable way of implementing it under Numpy would be to use the existing vectorization layer for floats, computing the arithmetic operations on "columns containing half of each number" rather than on "full numbers". As a proof of concept I wrote the following file: https://gist.github.com/baruchel/c86ed748939534d8910d

I converted and vectorized the Algol 60 code from http://szmoore.net/ipdf/documents/references/dekker1971afloating.pdf (Dekker, 1971).

A test is provided at the end; for inverting 100,000 numbers, my type is about 3 or 4 times faster than GMPY and almost 50 times faster than MPmath. It should be even faster for some other operations, since I had to create an extra np.ones array for the test because inversion isn't implemented here yet (which could of course be done). You can run the file yourself (you may have to comment out mpmath or gmpy if you don't have them).

I would like to discuss how to make something along these lines available.

a) Would it be relevant to include this in Numpy? (I would think of some "contribution" tool rather than the Numpy core, because it would be painful to implement all the ufuncs; on the other hand, I am pretty sure many users would be happy to perform basic arithmetic operations even knowing that they can't use cos/sin/etc. on this type. In other words, I am not sure it would be a good idea to ship it as an everyday type, but I think it would be nice to have it quickly available in some way.) If you agree, how should I code it? (The current link is only a proof of concept; I would be happy to rewrite it in a cleaner way.)

b) Do you think such an attempt should remain external to Numpy itself and be released on my Github account without being integrated into Numpy?

Best regards,

-- Thomas Baruchel
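To give a concrete feel for the approach without opening the gist, here is a minimal sketch (the function names are mine for illustration, not the gist's; it is a simplified variant of Dekker's add2/mul12 routines and omits some of the normalization a robust double-double library needs): each number is kept as a pair of float64 columns (hi, lo), and Dekker-style error-free transformations are applied column-wise so that Numpy's ordinary vectorized float arithmetic does all the work.

```python
import numpy as np

def two_sum(a, b):
    # Error-free transformation (Knuth/Dekker): a + b == s + e exactly.
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e

def split(a):
    # Dekker's splitter: cut each double into two non-overlapping halves.
    c = 134217729.0 * a          # 2**27 + 1 for IEEE binary64
    hi = c - (c - a)
    return hi, a - hi

def two_prod(a, b):
    # Error-free transformation: a * b == p + e exactly.
    p = a * b
    a_hi, a_lo = split(a)
    b_hi, b_lo = split(b)
    e = ((a_hi * b_hi - p) + a_hi * b_lo + a_lo * b_hi) + a_lo * b_lo
    return p, e

def dd_add(x_hi, x_lo, y_hi, y_lo):
    # Double-double addition, working element-wise on whole arrays.
    s, e = two_sum(x_hi, y_hi)
    e = e + (x_lo + y_lo)
    return two_sum(s, e)

def dd_mul(x_hi, x_lo, y_hi, y_lo):
    # Double-double multiplication, also fully vectorized.
    p, e = two_prod(x_hi, y_hi)
    e = e + (x_hi * y_lo + x_lo * y_hi)
    return two_sum(p, e)

# All of the above operate on whole columns at once:
a_hi, a_lo = np.linspace(1.0, 2.0, 5), np.zeros(5)
b_hi, b_lo = np.full(5, np.pi), np.zeros(5)
print(dd_add(a_hi, a_lo, b_hi, b_lo))
print(dd_mul(a_hi, a_lo, b_hi, b_lo))
```

Every per-element step above is just a handful of float64 ufunc calls over full arrays, which is where the speed-up over dtype=object arrays of mpmath/gmpy numbers comes from.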
On Fri, Dec 11, 2015 at 6:25 AM, Thomas Baruchel wrote: Would it be relevant to include this in Numpy?
I think astropy does something similar for time and dates. There has also been some talk of adding a user type for IEEE 128-bit doubles. I've looked once for relevant code for the latter and, IIRC, the available packages were GPL :(.

Chuck
There has also been some talk of adding a user type for ieee 128 bit doubles. I've looked once for relevant code for the latter and, IIRC, the available packages were GPL :(.
This looks like it's BSD-ish: http://www.jhauser.us/arithmetic/SoftFloat.html Don't know if it's any good....

CHB
Actually, GCC implements 128-bit floats in software and provides them as __float128; there are also quad-precision versions of the usual functions. The Intel compiler provides this as well, I think, but I don't think Microsoft compilers do. A portable quad-precision library might be less painful.

The cleanest way to add extended precision to numpy is by adding a C-implemented dtype. This can be done in an extension module; see the quaternion and half-precision modules online.
Anne
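For what the result looks like from the Python side, here is an illustrative session with the third-party numpy-quaternion package (presumably the quaternion module referred to above; the exact API may differ): importing it registers a C-implemented user dtype, after which arrays of that dtype behave much like arrays of builtin types.

```python
import numpy as np
import quaternion  # third-party "numpy-quaternion"; importing it registers the dtype

# Arrays of the custom dtype look and feel like ordinary Numpy arrays:
a = np.array([np.quaternion(1, 0, 0, 0), np.quaternion(0, 1, 0, 0)])
print(a.dtype)   # the registered user dtype
print(a * a)     # elementwise multiplication via the C-implemented ufunc loops
```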
On Fri, Dec 11, 2015 at 11:22 AM, Anne Archibald wrote: A portable quad-precision library might be less painful.
This might be the same as, or similar to, a recent announcement for Julia: https://groups.google.com/d/msg/julia-users/iHTaxRVj1yM/M-WtZCedCQAJ

It would be useful to get this in a consistent way across platforms and compilers. I can think of several applications where higher-precision reduce operations would be useful in statistics. As a Windows user, I have never even seen a higher-precision float.

Josef
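A quick, illustrative way to check what a given build actually offers (details vary by platform and compiler): np.longdouble maps to the C long double that NumPy was compiled with, typically 80-bit extended precision on Linux/x86 but just a 64-bit double with MSVC on Windows.

```python
import numpy as np

# Inspect the extended-precision type of this particular NumPy build.
info = np.finfo(np.longdouble)
print(np.longdouble)                      # the scalar type backing "long double" here
print(info.nmant, info.eps)               # mantissa bits and machine epsilon
print(np.dtype(np.longdouble).itemsize)   # storage size in bytes
```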
On Fri, Dec 11, 2015 at 4:22 PM, Anne Archibald wrote: The cleanest way to add extended precision to numpy is by adding a C-implemented dtype.
We actually used a __float128 dtype as an example of how to create a custom dtype for a numpy C tutorial we did w/ Stefan Van der Walt a few years ago at SciPy. IIRC, one of the issues in making it more than a PoC was that numpy hardcoded things like long double being the highest precision, etc. But that may have been fixed since then.

David
On Dec 11, 2015 7:46 AM, "Charles R Harris" wrote:
I think astropy does something similar for time and dates. There has also been some talk of adding a user type for ieee 128 bit doubles. I've looked once for relevant code for the latter and, IIRC, the available packages were GPL :(.
You're probably thinking of the __float128 support in gcc, which relies on a LGPL (not GPL) runtime support library. (LGPL = any patches to the support library itself need to remain open source, but no restrictions are imposed on code that merely uses it.) Still, probably something that should be done outside of numpy itself for now. -n
On Fri, Dec 11, 2015 at 10:45 AM, Nathaniel Smith wrote:
You're probably thinking of the __float128 support in gcc, which relies on a LGPL (not GPL) runtime support library. (LGPL = any patches to the support library itself need to remain open source, but no restrictions are imposed on code that merely uses it.)
Still, probably something that should be done outside of numpy itself for now.
No, there are several other software packages out there. I know of the gcc version, but was looking for something more portable. Chuck
I have a mostly complete wrapping of the double-double type from the QD library (http://crd-legacy.lbl.gov/~dhbailey/mpdist/) into a numpy dtype. The real problem is, as David pointed out, that user dtypes aren't quite full equivalents of the builtin dtypes. I can post the code if there is interest. Something along the lines of what's being discussed here would be nice, since the extended type is subject to such variation.

Eric
"Thomas Baruchel"
While this is obviously the most relevant answer for many users because it will allow them to use Numpy arrays exactly as they would have used them with native types, the wrong thing is that from some point of view "true" vectorization will be lost.
What does "true vectorization" mean anyway? Sturla
Hi All,

astropy `Time` does indeed use two doubles internally, but it is very limited in the operations it allows: essentially only addition/subtraction, and multiplication with/division by a normal double.

It would be great to have better support within numpy; it is a pity to have a float128 type that does not provide the full associated precision.

All the best,

Marten
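For reference, the two-double representation is visible in astropy's public API (assuming a reasonably recent astropy; attribute names could differ in older versions): the jd1/jd2 attributes hold the high and low parts of the Julian date, and the restricted arithmetic described above is enough to keep tiny offsets exact.

```python
from astropy.time import Time, TimeDelta

t = Time('2015-12-12T00:00:00', format='isot', scale='utc')
print(t.jd1, t.jd2)                 # Julian date split across two float64 values

dt = TimeDelta(1e-9, format='sec')  # a one-nanosecond offset
print((t + dt) - t)                 # the tiny difference survives, thanks to the two doubles
```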
participants (11)
- Anne Archibald
- Charles R Harris
- Chris Barker - NOAA Federal
- David Cournapeau
- Elliot Hallmark
- Eric Moore
- josef.pktd@gmail.com
- Marten van Kerkwijk
- Nathaniel Smith
- Sturla Molden
- Thomas Baruchel