arbitrary precision linear algebra
Ben123
ben.is.located at gmail.com
Wed Mar 2 12:53:57 EST 2011
On Mar 2, 11:28 am, Robin Becker <ro... at reportlab.com> wrote:
> On 02/03/2011 16:39, Ben123 wrote:
> ...........
> >> Languages can't support infinitely large or small numbers, so try to
> >> multiply the inner variables by 10^n to increase their values, if this
> >> does not interfere with the method. For example, I did this when
> >> calculating geometric means of computer benchmarks.
>
> > Currently I have values between 1 and 1E-300 (not infinitely small). I
> > don't see how scaling by powers of 10 will increase precision.
>
> >> In such way you will be storing the number of zeros as n.
>
> > Are you saying python cares whether I express a number as 0.001 or
> > scaled by 10^5 to read 100? If this is the case, I'm still stuck. I
> > need the full range of eigenvalues from 1 to 1E-300, so the entire
> > range could be scaled by 1E300, but I would still need better
> > precision than 1 part in 1E19
>
> .......
>
> If you enter a number as 1e-19 then python will treat it as a float; by default
> python's float is IEEE double precision, so you're getting a 53 bit significand,
> i.e. roughly 15-16 decimal digits. That means you've already lost any idea of
> arbitrary precision.
>
> When you say you have numbers like 1E-300 are those actually numerically zero or
> have you some valid inputs that vary over a huge range. It should be possible to
> compute determinants/inverses etc to arbitrary precision as those are known to
> be polynomial functions of the input elements. However, eigenvalues are not.
I meant that I expect the eigenvalues to lie in a range from 1 down to
1E-300. The matrix entries are built from sine and cosine terms, and
their real and imaginary parts lie in [-10,10]. Thus I could specify the
matrix elements to arbitrary precision prior to diagonalization.
However, I'm looking for the resulting eigenvalues to have precision
down to 1E-300.
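A quick way to see the double-precision ceiling Robin describes, using
only the standard library (illustrative snippet, not from the original
thread):

```python
import sys

# IEEE-754 double precision: 53-bit significand, ~15-16 decimal digits.
eps = sys.float_info.epsilon   # smallest x such that 1.0 + x != 1.0
print(eps)                     # about 2.22e-16

# Any detail finer than ~1e-16 relative to 1.0 is simply lost:
print(1.0 + 1e-17 == 1.0)      # True: the 1e-17 vanishes entirely
```

So no matter how precisely the matrix elements are specified, once they
are stored as floats only ~16 digits survive.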
>
> eg
>
> [0 2]
> [1 0]
>
> has eigenvalues +/- sqrt(2) so even though you can represent the matrix in
> finite precision the eigenvalues require infinite precision.
Here's an example of a matrix I am diagonalizing:
from numpy import *
exmpl = array([
    [ 9.39582700e-04 +1.21760986e-21j,  3.32958159e-04 -6.03359588e-04j,
     -3.71551527e-04 -1.77143102e-04j,  2.87113322e-04 -9.84374562e-04j],
    [ 3.32958159e-04 +6.03359588e-04j,  5.05441331e-04 +6.80604208e-21j,
     -1.79123354e-05 -3.01368276e-04j,  7.33866807e-04 -1.64459142e-04j],
    [-3.71551527e-04 +1.77143102e-04j, -1.79123354e-05 +3.01368276e-04j,
      1.80324968e-04 -2.63622461e-21j,  7.20508934e-05 +4.43394732e-04j],
    [ 2.87113322e-04 +9.84374562e-04j,  7.33866807e-04 +1.64459142e-04j,
      7.20508934e-05 -4.43394732e-04j,  1.11903651e-03 -6.61744490e-21j]])
The eigenvalues I get back using linalg.eig(exmpl)[0] are
    2.74438550e-03 +7.67536143e-20j
    6.54324852e-12 +2.03438929e-20j
    1.39471366e-13 +4.68335525e-20j
    1.76591707e-12 +2.20243218e-20j
Here I would want better precision in the eigenvalues, in particular a
smaller imaginary component.
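For scale: with IEEE doubles, computed eigenvalues can generally only be
trusted down to roughly ||A|| * eps, and the imaginary parts of order
1E-20 above sit right at that rounding-noise floor. A rough estimate
(the 2.744e-03 figure is taken from the output above):

```python
import sys

largest_eig = 2.744e-03            # dominant eigenvalue from the output above
eps = sys.float_info.epsilon       # ~2.22e-16, relative precision of a double
noise_floor = largest_eig * eps    # below this, values are indistinguishable from 0
print(noise_floor)                 # ~6.1e-19
```

Anything numpy reports below ~1e-18 here is rounding noise, not signal,
which is why asking double precision for a 1E-300 eigenvalue is hopeless.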
To use your example, I'd like to increase the number of digits shown
in the eigenvalue:
from numpy import *
test=array([[0, 2],[1, 0]])
linalg.eig(test)[0]
but using
import mpmath
from mpmath import *
mp.dps = 50
from numpy import *
test=array([[0, 2],[1, 0]])
linalg.eig(test)[0]
does not help
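Setting mp.dps changes only mpmath's own arithmetic; numpy.linalg.eig
always runs in hardware doubles, so the two never interact. To get
50-digit eigenvalues, the entire computation has to happen inside
mpmath. A sketch using mpmath's own general eigenvalue solver (mp.eig,
available in recent mpmath releases, not in the 2011-era version
discussed in the thread):

```python
from mpmath import mp, matrix

mp.dps = 50                  # 50 significant decimal digits
A = matrix([[0, 2],
            [1, 0]])

E, ER = mp.eig(A)            # eigenvalues E, right eigenvectors ER
for e in E:
    print(e)                 # +/- sqrt(2), each to ~50 digits
```

For a Hermitian matrix like exmpl, mpmath's specialised mp.eighe routine
is the better-conditioned choice; either way the cost grows quickly with
matrix size and precision.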
>
> Eigenvalues are roots of a polynomial in the elements and root solving may
> require an infinite number of steps so it will be difficult with arbitrary
> matrices to keep arbitrary precision.
> --
> Robin Becker