
hi all. is there a function in numpy to compute the exp of a matrix, similar to expm in matlab? for example: expm([[0,0],[0,0]]) = eye(2) thanks, lorenzo.

lorenzo bolla wrote:
hi all. is there a function in numpy to compute the exp of a matrix, similar to expm in matlab? for example: expm([[0,0],[0,0]]) = eye(2)
thanks, lorenzo.
Numpy doesn't provide expm but scipy does.
from scipy.linalg import expm, expm2, expm3
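For example, a quick sanity check (a minimal sketch, assuming a SciPy build where scipy.linalg.expm is available):

from numpy import zeros, eye, allclose
from scipy.linalg import expm

# The matrix exponential of the zero matrix is the identity, as in Matlab.
print(allclose(expm(zeros((2, 2))), eye(2)))   # True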
Nils

On 20/07/07, Nils Wagner <nwagner@iam.uni-stuttgart.de> wrote:
lorenzo bolla wrote:
hi all. is there a function in numpy to compute the exp of a matrix, similar to expm in matlab? for example: expm([[0,0],[0,0]]) = eye(2)
Numpy doesn't provide expm but scipy does.
from scipy.linalg import expm, expm2, expm3
Just as a warning, numpy does provide expm1, but it does something different (exp(x)-1, computed directly). Anne
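To make the difference concrete, a small illustration (numpy.expm1 is elementwise, not a matrix function):

from numpy import array, expm1
from scipy.linalg import expm

a = array([[0.0, 0.0], [0.0, 0.0]])
print(expm(a))    # matrix exponential: the 2x2 identity
print(expm1(a))   # elementwise exp(a) - 1: the 2x2 zero matrix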

On 7/20/07, Anne Archibald <peridot.faceted@gmail.com> wrote:
On 20/07/07, Nils Wagner <nwagner@iam.uni-stuttgart.de> wrote:
lorenzo bolla wrote:
hi all. is there a function in numpy to compute the exp of a matrix, similar to expm in matlab? for example: expm([[0,0],[0,0]]) = eye(2)
Numpy doesn't provide expm but scipy does.
from scipy.linalg import expm, expm2, expm3
Just as a warning, numpy does provide expm1, but it does something different (exp(x)-1, computed directly).
On a separate note, I'm working to provide faster and more accurate versions of sqrtm and expm. The current versions do not take full advantage of LAPACK. Here are some preliminary benchmarks:

Ill-conditioned
----------------
linalg.sqrtm    : error=9.37e-27, 573.38 usec/pass
sqrtm_svd       : error=2.16e-28, 142.38 usec/pass
sqrtm_eig       : error=4.79e-27, 270.38 usec/pass
sqrtm_symmetric : error=1.04e-27, 239.30 usec/pass
sqrtm_symmetric2: error=2.73e-27, 190.03 usec/pass

Well-conditioned
----------------
linalg.sqrtm    : error=1.83e-29, 478.67 usec/pass
sqrtm_svd       : error=8.11e-30, 130.57 usec/pass
sqrtm_eig       : error=4.50e-30, 255.56 usec/pass
sqrtm_symmetric : error=2.78e-30, 237.61 usec/pass
sqrtm_symmetric2: error=3.35e-30, 167.27 usec/pass

Large
----------------
linalg.sqrtm    : error=5.95e-25, 8450081.68 usec/pass
sqrtm_svd       : error=1.64e-24, 151206.61 usec/pass
sqrtm_eig       : error=6.31e-24, 549837.40 usec/pass
sqrtm_symmetric : error=8.55e-25, 177422.29 usec/pass

where:

def sqrtm_svd(x):
    u, s, vt = linalg.svd(x)
    return dot(u, transpose((s**0.5) * transpose(vt)))

def sqrtm_eig(x):
    d, e = linalg.eig(x)
    d = (d**0.5).astype(float)
    return dot(e, transpose(d * e))

def sqrtm_symmetric(x, cond=1e-7):
    d, e = linalg.eigh(x)
    d[d < cond] = 0
    return dot(e, transpose((d**0.5) * e)).astype(float)

def sqrtm_symmetric2(x):
    # Not as robust due to initial Cholesky step
    l = linalg.cholesky(x, lower=1)
    u, s, vt = linalg.svd(l)
    return dot(u, transpose(s * u))

with SciPy linked against ACML.

-Kevin
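The harness behind these numbers isn't shown in the thread; here is a minimal self-contained sketch of how figures like "error" and "usec/pass" could be produced (the bench helper and the residual-based error measure are assumptions, not Kevin's actual script):

import timeit
import numpy as np
from numpy import dot
from scipy import linalg

def bench(f, x, passes=100):
    r = f(x)
    err = abs(dot(r, r) - x).max()                      # one plausible error measure
    t = timeit.timeit(lambda: f(x), number=passes) / passes
    print('%-16s: error=%.2e, %.2f usec/pass' % (f.__name__, err, t * 1e6))

b = np.random.randn(50, 50)
x = dot(b, b.T)                                          # symmetric positive definite test matrix
bench(linalg.sqrtm, x)                                   # the sqrtm_* functions above can be passed the same way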

On Fri, 20 Jul 2007 13:03:09 -0400 "Kevin Jacobs <jacobs@bioinformed.com>" <bioinformed@gmail.com> wrote:
On 7/20/07, Anne Archibald <peridot.faceted@gmail.com> wrote:
On 20/07/07, Nils Wagner <nwagner@iam.uni-stuttgart.de> wrote:
lorenzo bolla wrote:
hi all. is there a function in numpy to compute the exp of a matrix, similar to expm in matlab? for example: expm([[0,0],[0,0]]) = eye(2)
Numpy doesn't provide expm but scipy does.
from scipy.linalg import expm, expm2, expm3
Just as a warning, numpy does provide expm1, but it does something different (exp(x)-1, computed directly).
On a separate note, I'm working to provide faster and more accurate versions of sqrtm and expm. The current versions do not take full advantage of LAPACK. Here are some preliminary benchmarks:
[SNIP]
def sqrtm_eig(x):
    d, e = linalg.eig(x)
    d = (d**0.5).astype(float)
    return dot(e, transpose(d * e))
[SNIP]
Kevin,

Your sqrtm_eig(x) function won't work if x is defective. See test_defective.py for details.

Have you considered the algorithms proposed by Nick Higham for various matrix functions?
http://www.maths.manchester.ac.uk/~higham/pap-mf.html

Nils
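For instance (a hypothetical illustration, not the contents of test_defective.py): the 2x2 Jordan block is defective, so any eigendecomposition-based square root breaks down on it:

import numpy as np
from scipy import linalg

x = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # defective: only one independent eigenvector
# Its principal square root is [[1, 0.5], [0, 1]].

d, e = linalg.eig(x)
print(np.linalg.cond(e))            # enormous; e is numerically singular, so
                                    # reconstructing a sqrtm from d and e fails here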

On 7/20/07, Nils Wagner <nwagner@iam.uni-stuttgart.de> wrote:
Your sqrtm_eig(x) function won't work if x is defective. See test_defective.py for details.
I am aware, though at least on my system the SVD-based method is by far the fastest and most robust (and can be made more robust by the addition of a relative condition number threshold). The eig version was included mainly for comparison.

Have you considered the algorithms proposed by Nick Higham for various matrix functions?
Yep. He is one of my heroes. The downside is that direct Python implementations of many of his algorithms will almost always be significantly slower than algorithms that leave the heavy lifting to LAPACK. This is certainly the case for the current sqrtm and expm code.

Even in the Python domain, there is significant room to optimize the current sqrtm implementation, since one of the key inner loops can be trivially vectorized. I'll certainly give that a shot and add it to my next round of benchmarks.

However, I'm not sure that I want to commit to going to the next level by developing and maintaining a C implementation of sqrtm. While it has the potential to achieve near-optimal performance for that method, we may be reaching the point of diminishing returns for my needs. I certainly won't complain if someone else is willing to do so.

Thanks,
-Kevin
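For context, a rough sketch of a Schur-recurrence square root in the Björck-Hammarling / Higham style, with the inner k-sum written as a dot product rather than a Python loop. This is only an illustration of the vectorization point, not the actual scipy.linalg.sqrtm code, and sqrtm_schur is a hypothetical name:

import numpy as np
from scipy import linalg

def sqrtm_schur(a):
    # a = Z T Z^H with T upper triangular (complex Schur form)
    t, z = linalg.schur(a, output='complex')
    n = t.shape[0]
    r = np.zeros((n, n), dtype=complex)
    for j in range(n):
        r[j, j] = np.sqrt(t[j, j])
        for i in range(j - 1, -1, -1):
            # sum over i < k < j of R[i,k] * R[k,j], done as one dot product
            s = np.dot(r[i, i+1:j], r[i+1:j, j])
            r[i, j] = (t[i, j] - s) / (r[i, i] + r[j, j])
    return np.dot(np.dot(z, r), z.conj().T)

x = np.array([[1.0, 1.0], [0.0, 1.0]])   # works even for this defective matrix
r = sqrtm_schur(x)
print(abs(np.dot(r, r) - x).max())        # residual is essentially zero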

On Fri, 20 Jul 2007 14:45:43 -0400 "Kevin Jacobs <jacobs@bioinformed.com>" <bioinformed@gmail.com> wrote:
On 7/20/07, Nils Wagner <nwagner@iam.uni-stuttgart.de> wrote:
Your sqrtm_eig(x) function won't work if x is defective. See test_defective.py for details.
I am aware, though at least on my system the SVD-based method is by far the fastest and most robust (and can be made more robust by the addition of a relative condition number threshold). The eig version was included mainly for comparison.

Your sqrtm_svd(x) method also returns a wrong result if x is defective.

Cheers,
Nils

On 7/20/07, Nils Wagner <nwagner@iam.uni-stuttgart.de> wrote:
Your sqrtm_eig(x) function won't work if x is defective. See test_defective.py for details.
I've added several defective matrices to my test cases, and the SVD method doesn't work as well as I'd thought (which is obvious in retrospect). I'm going to circle back and see what I can do to speed up Nick Higham's algorithm to the point where it is useful for me. Otherwise, my need is for a fast sqrtm for symmetric positive definite inputs, so I may still end up using the SVD approach and contributing a complementary sqrtm_nondefective back to scipy.

-Kevin
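For the symmetric positive definite case, a minimal sketch along those lines (essentially the sqrtm_symmetric above, but with the relative condition-number cutoff mentioned earlier; sqrtm_psd and rcond are hypothetical names, not scipy API):

import numpy as np
from scipy import linalg

def sqrtm_psd(x, rcond=1e-10):
    # eigendecomposition of a symmetric matrix; clip eigenvalues that are
    # negligible (or slightly negative from round-off) relative to the largest
    d, e = linalg.eigh(x)
    d = np.where(d > rcond * d.max(), d, 0.0)
    return np.dot(e * np.sqrt(d), e.T)        # e diag(sqrt(d)) e^T

b = np.random.randn(30, 30)
x = np.dot(b, b.T)                             # SPD test matrix
r = sqrtm_psd(x)
print(abs(np.dot(r, r) - x).max())             # small residual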

On 7/20/07, Kevin Jacobs <jacobs@bioinformed.com> <bioinformed@gmail.com> wrote:
[SNIP]
def sqrtm_svd(x):
    u, s, vt = linalg.svd(x)
    return dot(u, transpose((s**0.5) * transpose(vt)))
[SNIP]
I expect using sqrt(x) will be faster than x**.5.

Chuck

On 7/20/07, Charles R Harris <charlesr.harris@gmail.com> wrote: [SNIP]
I expect using sqrt(x) will be faster than x**.5.
You might want to check that. I believe that x**0.5 is one of the magic special cases that is optimized to run fast (by calling sqrt in this case). IIRC the full set is [-1, 0, 0.5, 1, 2]. These were all of the ones that we believed could be implemented without losing accuracy relative to using pow.

--
. __ . |-\ . .
tim.hochberg@ieee.org
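A quick way to check that claim on a given build (just a sanity check, not a benchmark):

import numpy as np

x = np.linspace(0.0, 10.0, 1000)
# If x**0.5 really dispatches to sqrt, the two should agree essentially exactly.
print(np.allclose(x**0.5, np.sqrt(x)))
print(abs(x**0.5 - np.sqrt(x)).max())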

On 7/20/07, Charles R Harris <charlesr.harris@gmail.com> wrote:
I expect using sqrt(x) will be faster than x**.5.
I did test this at one point and was also surprised that sqrt(x) seemed slower than **.5. However, I found out otherwise while preparing a timeit script to demonstrate this observation. (Unfortunately, I didn't save the precise script I used to explore this issue the first time.) On my system, for arrays with more than 2 elements, sqrt is indeed faster. For smaller arrays, the difference is negligible, but edges slightly in favor of **0.5.

Thanks,
-Kevin

On 7/20/07, Kevin Jacobs <jacobs@bioinformed.com> <bioinformed@gmail.com> wrote:
On 7/20/07, Charles R Harris <charlesr.harris@gmail.com> wrote:
I expect using sqrt(x) will be faster than x**.5.
I did test this at one point and was also surprised that sqrt(x) seemed slower than **.5. However, I found out otherwise while preparing a timeit script to demonstrate this observation. (Unfortunately, I didn't save the precise script I used to explore this issue the first time.) On my system, for arrays with more than 2 elements, sqrt is indeed faster. For smaller arrays, the difference is negligible, but edges slightly in favor of **0.5.
This is just not my day. My observations above are valid for integer arrays, but not float arrays:

sqrt(int array)   :  6.98 usec/pass
(int array)**0.5  : 22.75 usec/pass
sqrt(float array) :  6.70 usec/pass
(float array)**0.5:  4.66 usec/pass

Generated by:

import timeit

n = 100000

t = timeit.Timer(stmt='sqrt(arange(3))', setup='from numpy import arange,array,sqrt\nx=arange(100)')
print 'sqrt(int array)   : %5.2f usec/pass' % (1000000*t.timeit(number=n)/n)

t = timeit.Timer(stmt='x**0.5', setup='from numpy import arange,array\nx=arange(100)')
print '(int array)**0.5  : %5.2f usec/pass' % (1000000*t.timeit(number=n)/n)

t = timeit.Timer(stmt='sqrt(arange(3))', setup='from numpy import arange,array,sqrt\nx=arange(100,dtype=float)')
print 'sqrt(float array) : %5.2f usec/pass' % (1000000*t.timeit(number=n)/n)

t = timeit.Timer(stmt='x**0.5', setup='from numpy import arange,array\nx=arange(100,dtype=float)')
print '(float array)**0.5: %5.2f usec/pass' % (1000000*t.timeit(number=n)/n)

-Kevin

On 7/20/07, Kevin Jacobs <jacobs@bioinformed.com> <bioinformed@gmail.com> wrote:
On 7/20/07, Kevin Jacobs <jacobs@bioinformed.com> <bioinformed@gmail.com> wrote:
On 7/20/07, Charles R Harris < charlesr.harris@gmail.com> wrote:
I expect using sqrt(x) will be faster than x**.5.
I did test this at one point and was also surprised that sqrt(x) seemed slower than **.5. However, I found out otherwise while preparing a timeit script to demonstrate this observation. (Unfortunately, I didn't save the precise script I used to explore this issue the first time.) On my system, for arrays with more than 2 elements, sqrt is indeed faster. For smaller arrays, the difference is negligible, but edges slightly in favor of **0.5.
This is just not my day. My observations above are valid for integer arrays, but not float arrays:
sqrt(int array)   :  6.98 usec/pass
(int array)**0.5  : 22.75 usec/pass
sqrt(float array) :  6.70 usec/pass
(float array)**0.5:  4.66 usec/pass
From the source, it appears that powers [-1, 0, 0.5, 1, 2] are optimized for float and complex types, while one power, 2, is optimized for other types. I can't recall why that is however.
--
. __ . |-\ . .
tim.hochberg@ieee.org
participants (6)
- Anne Archibald
- Charles R Harris
- Kevin Jacobs <jacobs@bioinformed.com>
- lorenzo bolla
- Nils Wagner
- Timothy Hochberg