scipy.sparse versus pysparse
Hi, I am doing some testing between scipy.sparse and pysparse on my Ubuntu machine. Some testing reveals that pysparse is about 9 times faster in matrix-vector multiplication than scipy.sparse. Might there be anything specific I forgot to do during scipy's installation (I just ran apt-get install python-scipy)? Is there another simple explanation for this difference? I prefer to use scipy.sparse for its cleaner API, but a factor 9 in speed is considerable. Thanks, Nicky
Hey, On Wed, Jul 23, 2014 at 4:40 PM, nicky van foreest <vanforeest@gmail.com> wrote:
I am doing some testing between scipy.sparse and pysparse on my Ubuntu machine. Some testing reveals that pysparse is about 9 times faster in matrix-vector multiplication than scipy.sparse. Might there be anything specific I forgot to do during scipy's installation (I just ran apt-get install python-scipy)? Is there another simple explanation for this difference? I prefer to use scipy.sparse for its cleaner API, but a factor 9 in speed is considerable.
Could you post your benchmarking code somewhere (or show it here), please? Cheers, Moritz
Hi Moritz, Sure. Please see below. I included an extra time stamp to analyse the results in slightly more detail. It turns out that the matrix-vector multiplications are roughly the same in scipy.sparse and pysparse, but that building the matrices in pysparse is way faster. I compute the stationary distribution vector of a Markov chain. The details of the algorithm are not really important. I guess that the logic of the code is easy to understand; if not, please let me know. There are three nearly identical methods to compute the distribution vector: the first uses scipy.sparse and the * operator to compute a vector times a matrix, the second uses scipy.sparse dot(), and the third uses pysparse. You might want to skip the code and jump right away to the results, which I include below the code.

==== Code

from numpy import ones, zeros, empty
import scipy.sparse as sp
import pysparse
from pylab import matshow, savefig
from scipy.linalg import norm
import time

labda, mu1, mu2 = 1., 1.1, 1.01
N1, N2 = 400, 400
size = N1*N2
eps = 1e-3
maxIterations = 1e5

print "size = ", size

def state(i,j):
    return j*N1 + i

def fillOffDiagonal(Q):
    # labda
    for i in range(0,N1-1):
        for j in range(0,N2):
            Q[(state(i,j),state(i+1,j))] = labda
    # mu2
    for i in range(0,N1):
        for j in range(1,N2):
            Q[(state(i,j),state(i,j-1))] = mu2
    # mu1
    for i in range(1,N1):
        for j in range(0,N2-1):
            Q[(state(i,j),state(i-1,j+1))] = mu1
    #print "ready filling"

def computePiMethod1():
    """ based on scipy.sparse, naive matrix-vector multiplication """
    e0 = time.time()
    Q = sp.dok_matrix((size,size))
    fillOffDiagonal(Q)
    # Set the diagonal of Q such that the row sums are zero
    Q.setdiag( -Q*ones(size) )
    # Compute a suitable stochastic matrix by means of uniformization
    l = min(Q.values())*1.001  # avoid periodicity, see Trivedi's book
    P = sp.eye(size, size) - Q/l
    P = P.tocsr()
    pi = zeros(size);  pi1 = zeros(size)
    pi[0] = 1
    n = norm(pi - pi1,1); i = 0
    e1 = time.time()
    while n > eps and i < maxIterations:
        pi1 = pi*P
        pi = pi1*P  # avoid copying pi1 to pi
        n = norm(pi - pi1,1); i += 1
    print "Method 1: ", e1-e0, time.time() - e1, i
    return pi

def computePiMethod2():
    """ based on scipy.sparse, dot multiplication """
    e0 = time.time()
    Q = sp.dok_matrix((size,size))
    fillOffDiagonal(Q)
    # Set the diagonal of Q such that the row sums are zero
    Q.setdiag( -Q*ones(size) )
    # Compute a suitable stochastic matrix by means of uniformization
    l = min(Q.values())*1.001  # avoid periodicity, see Trivedi's book
    P = sp.eye(size, size) - Q/l
    P = P.transpose()
    P = P.tocsr()
    pi = zeros(size);  pi1 = zeros(size)
    pi[0] = 1
    n = norm(pi - pi1,1); i = 0
    e1 = time.time()
    while n > eps and i < maxIterations:
        pi1 = P.dot(pi)
        pi = P.dot(pi1)
        n = norm(pi - pi1,1); i += 1
    print "Method 2: ", e1-e0, time.time() - e1, i
    return pi

def computePiMethod3():
    """ based on pysparse """
    e0 = time.time()
    Q = pysparse.spmatrix.ll_mat(size,size)
    fillOffDiagonal(Q)
    # fill diagonal
    x = empty(size)
    Q.matvec(ones(size),x)
    Q.put(-x)
    # uniformize
    l = min(Q.values())*1.001
    P = pysparse.spmatrix.ll_mat(size,size)
    P.put(ones(size))
    P.shift(-1./l, Q)
    # Compute pi
    P = P.to_csr()
    pi = zeros(size);  pi1 = zeros(size)
    pi[0] = 1
    n = norm(pi - pi1,1); i = 0
    e1 = time.time()
    while n > eps and i < maxIterations:
        P.matvec_transp(pi,pi1)
        P.matvec_transp(pi1,pi)
        n = norm(pi - pi1,1); i += 1
    print "Method 3: ", e1-e0, time.time() - e1, i
    return pi

def plotPi(pi):
    pi = pi.reshape(N2,N1)
    matshow(pi)
    savefig("pi.png")

if __name__ == "__main__":
    pi1 = computePiMethod1()
    pi2 = computePiMethod2()
    pi3 = computePiMethod3()
    d1 = norm(pi1-pi2,1)
    d2 = norm(pi1-pi3,1)
    print d1, d2
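(A quick note on the uniformization step in the code, since all three methods rely on it: for a generator matrix Q with zero row sums and nonnegative off-diagonal entries, uniformization forms the stochastic matrix $P = I + Q/\Lambda$ for any $\Lambda \ge \max_i |q_{ii}|$, and $\pi P = \pi$ iff $\pi Q = 0$, so P has the same stationary vector as Q. In the code, l = min(Q.values())*1.001 picks out the most negative diagonal entry with a 1.001 safety factor, so l < 0 and P = eye(size) - Q/l is exactly this construction; taking $\Lambda$ strictly larger than $\max_i |q_{ii}|$ keeps the diagonal of P positive, which avoids periodicity.)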
============================== Results

nicky@chuck:~/myprogs/python/queueing/tandemQueueMDP$ python tandemqueue.py
size =  40000
Method 1:  4.31593680382 0.387089014053 285
Method 2:  4.27599096298 0.273495912552 285
Method 3:  0.0856800079346 0.267058134079 285
0.0 5.05082123061e-15

The first number after "Method 1:" (and likewise for the other methods) is the time it takes to fill the matrix, the second number is the time needed to carry out the multiplications, and the third is the number of iterations (each iteration involves two multiplications). The second number is (nearly) the same for all three methods, hence the multiplication time is about the same. There is, however, a huge difference in the first number, i.e., the time required to build the matrix.
It takes about 4 sec for scipy.sparse, and 8e-2 sec for pysparse. The last row prints the difference between the results (the stationary distribution vectors) as obtained by all three methods. Luckily the results are the same, up to rounding. Here is a try with a bigger matrix:

nicky@chuck:~/myprogs/python/queueing/tandemQueueMDP$ python tandemqueue.py
size =  160000
Method 1:  17.4650111198 1.80849194527 285
Method 2:  17.5270321369 1.54912996292 285
Method 3:  0.382800102234 1.63900899887 285
0.0 5.87925665394e-15

Again the same result. Do you perhaps have any explanation for this? Thanks, Nicky
23.07.2014, 23:08, nicky van foreest wrote:
Sure. Please see below. I included an extra time stamp to analyse the results in slightly more detail. It turns out that the matrix-vector multiplications are roughly the same in scipy.sparse and pysparse, but that building the matrices in pysparse is way faster.
The benchmark is mainly measuring the speed of dok_matrix.__setitem__ for scalars (dok_matrix.setdiag is naive and just sets items in a for loop). Neither dok_matrix nor lil_matrix is very fast. This is largely limited by the fact that they use a Python dict and Python lists as data structures, which have non-negligible overheads. lil_matrix was optimized in Scipy 0.14.0, so you may get better results using it (for those Scipy versions). Additionally, vectorized assignment into sparse matrices is now supported, so a further performance improvement can be obtained by replacing the for loops in fillOffDiagonal; see the sketch below. There may be some room for optimization in dok_matrix for scalar assignment, but this is probably not more than 2x. Closing the remaining 10x factor vs. pysparse pretty much requires not using Python data structures for storing the numbers. csr, csc, bsr, and dia are OK, but those data structures are not well-suited for matrix assembly. -- Pauli Virtanen
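To make the vectorized-assignment point concrete, here is a minimal sketch of what one of the fillOffDiagonal loops could look like (assuming Scipy >= 0.14; the meshgrid construction is just one way to build the index arrays and is not from the original post):

import numpy as np
import scipy.sparse as sp

N1, N2 = 400, 400
size = N1 * N2
labda = 1.0

def state(i, j):
    return j * N1 + i

# Build all (row, col) pairs for the "labda" transitions in one go,
# instead of one Python-level __setitem__ call per entry.
i, j = np.meshgrid(np.arange(N1 - 1), np.arange(N2), indexing='ij')
rows = state(i, j).ravel()
cols = state(i + 1, j).ravel()

Q = sp.lil_matrix((size, size))
Q[rows, cols] = labda  # single vectorized assignment for (N1-1)*N2 entries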
Hi Pauli, Thanks for your clarifications. Nicky
24.07.2014, 11:11, nicky van foreest wrote:
Thanks for your clarifications.
I should note that the issue of adding a sparse format more suitable for fast matrix assembly has been brought up, but it has not been implemented yet. While the fact that lil_matrix and dok_matrix are Python data structures is nice, a more practical approach would use opaque data storage (similar to ll_mat in pysparse); a sketch of the interim workaround follows below. Pauli
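In the meantime, a common workaround is to accumulate the entries in plain Python lists (or numpy arrays) and assemble a coo_matrix in one shot, converting to CSR only for the matvecs. A minimal sketch, with illustrative values:

import scipy.sparse as sp

size = 4
rows, cols, vals = [], [], []

# accumulate off-diagonal entries in cheap Python lists,
# avoiding per-entry sparse-matrix overhead
rows.append(0); cols.append(1); vals.append(1.0)
rows.append(1); cols.append(2); vals.append(1.1)
rows.append(2); cols.append(3); vals.append(1.01)

# one-shot assembly, then convert to a format that multiplies fast
Q = sp.coo_matrix((vals, (rows, cols)), shape=(size, size)).tocsr()
print(Q.toarray())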
That would be great. I'll check the mailing list to see when it comes along :-)
On Thu, Jul 24, 2014 at 2:49 PM, nicky van foreest <vanforeest@gmail.com> wrote:
That would be great. I'll check the mailing list to see when it comes along :-)
After profiling the code I see that much of the time is spent doing numerical things with the DOK data, like negating or scaling each value. This is slow for DOK but faster for other formats like COO. Although you can't set COO matrix entries after the matrix has been built, implementing setdiag seemed enough to work for your case, though you possibly lose the parts of the API that you prefer over that of pysparse. I took the liberty of rewriting some of the code, exploring some other scipy.sparse.linalg algorithms; I too run across problems like this where I need the equilibrium distribution for a sparse instantaneous transition rate matrix with a combinatorially large state space. For your particular example the shift-invert arpack function that uses superlu for sparse decomposition seems unreasonably effective. Cheers, Alex

---

from __future__ import print_function
from numpy import ones, zeros, array
import scipy.sparse as sp
from scipy.sparse.linalg import eigs
from pylab import matshow, savefig
from scipy.linalg import norm
import time

labda, mu1, mu2 = 1., 1.1, 1.01
N1, N2 = 400, 400
size = N1*N2
eps = 1e-3
maxIterations = int(1e5)
guess = array([1] + [0] * (size-1))

def state(i,j):
    return j*N1 + i

def gen_rate_triples():
    for i in range(0,N1-1):
        for j in range(0,N2):
            yield state(i, j), state(i+1, j), labda
    for i in range(0,N1):
        for j in range(1,N2):
            yield state(i, j), state(i, j-1), mu2
    for i in range(1,N1):
        for j in range(0,N2-1):
            yield state(i, j), state(i-1, j+1), mu1

def get_rate_info(triples):
    rows, cols, rates = zip(*triples)
    pre_Q = sp.coo_matrix((rates, (rows, cols)), shape=(size, size))
    exit_rates = pre_Q.dot(ones(size))
    return pre_Q, exit_rates

def get_PT(pre_Q, exit_rates):
    urate = exit_rates.max() * 1.001
    P = pre_Q / urate
    P.setdiag(1 - exit_rates / urate)
    return P.T.tocsr()

def get_QT(pre_Q, exit_rates):
    Q = pre_Q.copy()
    Q.setdiag(-exit_rates)
    return Q.T.tocsr()

def QTpi(QT, guess, tol):
    w, v = eigs(QT, k=1, v0=guess, sigma=1e-6, which='LM',
                tol=tol, maxiter=maxIterations)
    pi = v[:, 0].real
    return pi / pi.sum()

def PTpi(PT, guess, tol):
    w, v = eigs(PT, k=1, v0=guess, tol=eps, maxiter=maxIterations)
    pi = v[:, 0].real
    return pi / pi.sum()

def power_method(PT, guess, abstol):
    p_prev = zeros(size)
    p = guess.copy()
    for i in range(maxIterations):
        if norm(p - p_prev, ord=1) < abstol:
            break
        p_prev = PT.dot(p)
        p = PT.dot(p_prev)
    return p

print('settings:')
print('labda:', labda)
print('mu1:', mu1)
print('mu2:', mu2)
print('N1:', N1)
print('N2:', N2)
print()

print('precalculation times:')
tm = time.time()
pre_Q, exit_rates = get_rate_info(gen_rate_triples())
print(time.time() - tm, 'for some rate info')
tm = time.time()
QT = get_QT(pre_Q, exit_rates)
print(time.time() - tm, 'to make the transition rate matrix')
tm = time.time()
PT = get_PT(pre_Q, exit_rates)
print(time.time() - tm, 'to make the uniformized trans prob matrix')
print()

# use an iterative method
tm = time.time()
pi_P = PTpi(PT, guess, eps)
tm_P = time.time() - tm

# use superlu and arpack
tm = time.time()
pi_Q = QTpi(QT, guess, eps)
tm_Q = time.time() - tm

# use a power method
tm = time.time()
pi_R = power_method(PT, guess, eps)
tm_R = time.time() - tm

# make pngs
matshow(pi_P.reshape(N2,N1)); savefig("pi_P.png")
matshow(pi_Q.reshape(N2,N1)); savefig("pi_Q.png")
matshow(pi_R.reshape(N2,N1)); savefig("pi_R.png")

print('distribution estimates:')
print('P:', pi_P)
print('Q:', pi_Q)
print('R:', pi_R)
print()

print('computation times for the iterations:')
print('P:', tm_P)
print('Q:', tm_Q)
print('R:', tm_R)
print()

print('violation of the invariant pi * P = pi:')
print('P:', norm(PT*pi_P - pi_P, ord=1))
print('Q:', norm(PT*pi_Q - pi_Q, ord=1))
print('R:', norm(PT*pi_R - pi_R, ord=1))
print()

print('violation of the invariant pi * Q = 0:')
print('P:', norm(QT*pi_P, ord=1))
print('Q:', norm(QT*pi_Q, ord=1))
print('R:', norm(QT*pi_R, ord=1))
print()

print('pngs:')
print('P: pi_P.png')
print('Q: pi_Q.png')
print('R: pi_R.png')
print()

---

settings:
labda: 1.0
mu1: 1.1
mu2: 1.01
N1: 400
N2: 400

precalculation times:
0.753726005554 for some rate info
0.045077085495 to make the transition rate matrix
0.0467808246613 to make the uniformized trans prob matrix

distribution estimates:
P: [ 0.00501086 0.00455523 0.00414086 ..., -0. -0. -0. ]
Q: [ 9.00503398e-04 8.18606355e-04 7.44190344e-04 ..., 3.62901458e-07 3.59308342e-07 3.55750817e-07]
R: [ 0.0053192 0.00483551 0.00439555 ..., 0. 0. 0. ]

computation times for the iterations:
P: 2.88090801239
Q: 4.26303100586
R: 0.834770202637

violation of the invariant pi * P = pi:
P: 0.0016872009599
Q: 2.74708912485e-08
R: 0.000994144099758

violation of the invariant pi * Q = 0:
P: 0.00525244218029
Q: 8.55199061005e-08
R: 0.0030948799384

pngs:
P: pi_P.png
Q: pi_Q.png
R: pi_R.png
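(A gloss on why the shift-invert run converges so well: the stationary vector satisfies $\pi Q = 0$, i.e. $Q^T \pi^T = 0$, so $\pi^T$ is the eigenvector of $Q^T$ with eigenvalue 0. Passing sigma=1e-6 to eigs makes ARPACK work with the operator $(Q^T - \sigma I)^{-1}$, which SuperLU factors once; the eigenvalue of $Q^T$ nearest $\sigma$, namely 0, maps to $1/(0 - \sigma)$, a dominant eigenvalue of magnitude about 1e6, so ARPACK needs very few iterations. The price is one sparse LU factorization, which the timings above suggest is well worth it here.)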
Hi Alex, Thanks for your input. I'll try to run it and get back to you. Nicky