On Wed, Sep 8, 2010 at 15:10, Michael Gilbert wrote:
On Wed, 8 Sep 2010 15:04:17 -0500, Robert Kern wrote:
On Wed, Sep 8, 2010 at 14:44, Michael Gilbert wrote:
Just wanted to say that numpy object arrays + decimal solved all of my problems, which were all caused by the disconnect between decimal and binary representation of floating point numbers.
Are you sure? Unless I'm failing to think through this properly, catastrophic cancellation for large numbers is an intrinsic property of fixed-precision floating point regardless of the base. decimal and mpmath both help with that problem because they have arbitrary precision.
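A minimal sketch of the point above, using only the stdlib decimal module with a deliberately reduced 10-digit context (the small precision is an assumption chosen to make the effect visible; decimal's default is 28 digits): even in base 10, a fixed-precision sum silently drops digits that fall outside the working precision.

```python
from decimal import Decimal, getcontext

# Assume a small fixed working precision of 10 significant digits.
getcontext().prec = 10

big = Decimal(1)
small = Decimal('1e-15')   # far below the 10-digit precision of `big`

s = big + small    # exact sum needs 16 digits; rounded back to 10, it is 1
lost = s - big     # the small addend has vanished entirely

print(lost == 0)   # the information in `small` is gone
```

So decimal helps with base-10 representation issues by construction, but once you fix the precision, cancellation of nearly equal quantities behaves the same way in any base; raising the context precision is what actually buys headroom.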
Here is an example:
>>> 0.3/3.0 - 0.1
-1.3877787807814457e-17
>>> mpmath.mpf('0.3')/mpmath.mpf('3.0') - mpmath.mpf('0.1')
mpf('-1.3877787807814457e-17')
>>> decimal.Decimal('0.3')/decimal.Decimal('3.0') - decimal.Decimal('0.1')
Decimal("0.0")
Decimal solves the problem, whereas mpmath does not.
Okay, that's not an example of catastrophic cancellation, just a representation issue.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
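[Editorial aside: a minimal sketch of the representation point, using only the stdlib decimal module. The binary literal 0.3 is rounded to the nearest IEEE 754 double before any arithmetic happens, while Decimal('0.3') stores 0.3 exactly; that alone accounts for the nonzero residual in the float example, with no cancellation involved.]

```python
from decimal import Decimal

# Decimal(float) converts the double exactly, exposing the value the
# literal 0.3 actually denotes: a long run of digits slightly under 0.3.
binary_03 = Decimal(0.3)
decimal_03 = Decimal('0.3')

print(binary_03)                 # not exactly 0.3
print(binary_03 == decimal_03)   # False: the two values differ

# In decimal, 0.3/3.0 is exactly 0.1, so the residual is zero;
# in binary doubles it is not, which is the ~1.4e-17 seen above.
print(Decimal('0.3') / Decimal('3.0') - Decimal('0.1'))
print(0.3 / 3.0 - 0.1)
```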