Hi,
Are there any plans to add support for decimal floating point arithmetic, as defined in the 2008 revision of the IEEE 754 standard [0], in numpy?
Thanks for any info.
Best wishes, Mike
[0] http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4610935&tag=1
On Wed, Sep 8, 2010 at 10:26, Michael Gilbert michael.s.gilbert@gmail.com wrote:
No, there are no plans. Although IEEE 754-2008 defines the format and semantics of such numbers, they do not provide an implementation. If someone wishes to make a C implementation and integrate that into numpy as a new dtype, I think that would be welcome.
On Wed, Sep 8, 2010 at 9:26 AM, Michael Gilbert <michael.s.gilbert@gmail.com> wrote:
Not at the moment. There is currently no hardware or C support and adding new types to numpy isn't trivial. You can get some limited Decimal functionality by using python classes and object arrays, for instance the Decimal class in the python decimal module, but the performance isn't great.
What is your particular interest in decimal support?
Chuck
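The object-array approach Chuck describes can be sketched as follows (a minimal illustration; the values are arbitrary):

```python
import numpy as np
from decimal import Decimal

# An object array stores Python objects; numpy dispatches arithmetic
# element-wise to Decimal's own operators, at Python-call speed.
a = np.array([Decimal('0.1'), Decimal('0.2'), Decimal('0.3')], dtype=object)

print(a.sum())           # Decimal('0.6') -- exact, unlike binary floats
print(a / Decimal('3'))  # element-wise division in the current decimal context
```

Note that this buys exact decimal semantics, not speed: every element operation goes through the Python-level Decimal methods.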
On Wed, 8 Sep 2010 09:43:56 -0600, Charles R Harris wrote:
Not at the moment. There is currently no hardware or C support and adding new types to numpy isn't trivial. You can get some limited Decimal functionality by using python classes and object arrays, for instance the Decimal class in the python decimal module, but the performance isn't great.
What is your particular interest in decimal support?
Primarily avoiding catastrophic cancellation when subtracting large values. I was planning to use the decimal class, but was curious whether support for the IEEE standard was coming any time soon.
Thanks, Mike
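The cancellation Mike mentions can be demonstrated with ordinary binary64 floats (a minimal sketch; the values are chosen only to trigger the effect):

```python
from decimal import Decimal

# Near 1e16, consecutive binary64 values are 2 apart, so adding 1.0
# rounds away entirely and the subtraction yields 0.0 instead of 1.0.
big = 1.0e16
print((big + 1.0) - big)               # 0.0

# Decimal's default 28-digit context has enough digits to keep the 1;
# note the fix comes from the extra precision, not from the decimal base.
print(Decimal(10**16) + 1 - 10**16)    # Decimal('1')
```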
On Wed, Sep 8, 2010 at 9:46 AM, Michael Gilbert <michael.s.gilbert@gmail.com> wrote:
Primarily avoiding catastrophic cancellation when subtracting large values. I was planning to use the decimal class, but was curious whether support for the IEEE standard was coming any time soon.
If you just need more precision, mpmath has better performance than the Decimal class. Also, it might be possible to avoid the loss of precision by changing the computation, but I don't know the details of what you are doing.
Chuck
On Wed, Sep 8, 2010 at 12:23 PM, Charles R Harris wrote:
If you just need more precision, mpmath has better performance than the Decimal class. Also, it might be possible to avoid the loss of precision by changing the computation, but I don't know the details of what you are doing.
Just wanted to say that numpy object arrays + decimal solved all of my problems, which were all caused by the disconnect between decimal and binary representation of floating point numbers. It would be really nice to have IEEE decimal floating point since python's decimal module is a bit slow, but I can see that's going to be a very long process.
Thanks again for your help!
Mike
On Wed, Sep 8, 2010 at 14:44, Michael Gilbert michael.s.gilbert@gmail.com wrote:
Just wanted to say that numpy object arrays + decimal solved all of my problems, which were all caused by the disconnect between decimal and binary representation of floating point numbers.
Are you sure? Unless I'm failing to think through this properly, catastrophic cancellation for large numbers is an intrinsic property of fixed-precision floating point, regardless of the base. decimal and mpmath both help with that problem because they have arbitrary precision.
On Wed, 8 Sep 2010 15:04:17 -0500, Robert Kern wrote:
Are you sure? Unless I'm failing to think through this properly, catastrophic cancellation for large numbers is an intrinsic property of fixed-precision floating point, regardless of the base. decimal and mpmath both help with that problem because they have arbitrary precision.
Here is an example:
>>> 0.3/3.0 - 0.1
-1.3877787807814457e-17
>>> mpmath.mpf('0.3')/mpmath.mpf('3.0') - mpmath.mpf('0.1')
mpf('-1.3877787807814457e-17')
>>> decimal.Decimal('0.3')/decimal.Decimal('3.0') - decimal.Decimal('0.1')
Decimal("0.0")
Decimal solves the problem, whereas mpmath doesn't.
Mike
On Wed, Sep 8, 2010 at 22:10, Michael Gilbert michael.s.gilbert@gmail.com wrote:
You can change mpmath's precision to an arbitrarily high value:
In [4]: mpmath.mp.prec = 100

In [5]: mpmath.mpf('0.3')/mpmath.mpf('3.0') - mpmath.mpf('0.1')
Out[5]: mpf('0.0')
Regards.
On Wed, 8 Sep 2010 22:20:30 +0200, Sandro Tosi wrote:
Thanks. I already knew that. I tried prec = 500, which did of course get the error down quite a bit to begin with, but even so, it continued to grow and grow. So decimal wins out.
Mike
On Wed, Sep 8, 2010 at 15:10, Michael Gilbert michael.s.gilbert@gmail.com wrote:
Okay, that's not an example of catastrophic cancellation, just a representation issue.
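The representation issue can be seen directly: the literal 0.1 is stored as the nearest binary64 value, and constructing a Decimal from that float exposes the stored value exactly (a quick illustration):

```python
from decimal import Decimal

# From a float: captures the exact binary64 value that 0.1 is stored as,
# showing that 0.1 has no finite binary representation.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# From a string: the exact decimal value, which is what the examples
# above rely on.
print(Decimal('0.1'))  # 0.1
```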
On 8 September 2010 16:33, Robert Kern robert.kern@gmail.com wrote:
Indeed - and as I understand it, the motivation for decimal numbers is not that they suffer less from roundoff than binary, but that the roundoff they do suffer is better suited to (for example) financial applications, where representing exact decimal values can be important.
If your problem is roundoff error itself, try this as your test case: take the square root of two and square it again. This will suffer from the same sort of roundoff problems in decimal as in binary.
Anne
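Anne's test case can be run directly; at a fixed precision, Decimal rounds off an irrational result just as binary floating point does (a minimal sketch using the default 28-digit context):

```python
from decimal import Decimal, getcontext

getcontext().prec = 28            # the default context precision
r = Decimal(2).sqrt()             # 28-digit approximation of sqrt(2)

print(r * r)                      # close to, but not exactly, 2
print(r * r == Decimal(2))        # False: the roundoff survives squaring
```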
--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
On Wed, 8 Sep 2010 15:44:02 -0400, Michael Gilbert wrote:
FYI, gcc 4.2 introduced initial support for decimal floats. However, various macros like DEC32_MAX, as well as printf support for dd, df, and dl, are not yet implemented, making it rather hard to work with.
Mike
On Wed, Sep 8, 2010 at 5:35 PM, Michael Gilbert wrote:
A reference would probably be useful: http://gcc.gnu.org/onlinedocs/gcc-4.2.4/gcc/Decimal-Float.html
As with all gcc documentation, it's rather terse :/
Best wishes, Mike
On Thu, Sep 9, 2010 at 12:43 AM, Charles R Harris charlesr.harris@gmail.com wrote:
Not at the moment. There is currently no hardware
Strictly speaking, there is hardware support on some high-end hardware (POWER6), but I have no idea how this is made available through common languages like C or Fortran. I guess this would be compiler specific.
A software implementation would be much more interesting at this stage, as I guess it will take years before support comes to commodity CPUs.
cheers,
David