This is related to a question I posted earlier. Suppose I have an array A with dimensions n x m x l and an array x with dimensions m x l. Interpret this as an array of l n x m matrices and an array of l m-dimensional vectors. I wish to compute the matrix-vector product A[:,:,k] x[:,k] for each k = 0, ..., l-1. I discovered that I could accomplish this with the command

np.diagonal(np.tensordot(A, x, axes=(1,0)), axis1=1, axis2=2)

The tensordot command gives me A_{ijk} x_{jl'} = C_{ikl'}, and the diagonal command grabs the entries in array C where k = l'. Is this the "optimal" way to make this calculation in numpy? It certainly makes for nice, clean code, but is it the fastest I can get?

-gideon
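The approach described above can be sketched end-to-end as follows; the sizes n, m, l here are small illustrative values, not from the original post. The result is checked against an explicit loop over the l matrix-vector products:

```python
import numpy as np

# Hypothetical small sizes, just for illustration.
n, m, l = 4, 3, 5
rng = np.random.default_rng(0)
A = rng.standard_normal((n, m, l))
x = rng.standard_normal((m, l))

# tensordot contracts A's axis 1 with x's axis 0:
# C[i, k, l'] = sum_j A[i, j, k] * x[j, l'], so C has shape (n, l, l).
C = np.tensordot(A, x, axes=(1, 0))

# diagonal over axes 1 and 2 keeps only the k = l' entries,
# giving result[i, k] = sum_j A[i, j, k] * x[j, k], shape (n, l).
result = np.diagonal(C, axis1=1, axis2=2)

# Reference: loop over the l matrix-vector products directly.
ref = np.empty((n, l))
for k in range(l):
    ref[:, k] = A[:, :, k] @ x[:, k]

assert np.allclose(result, ref)
```

Note that the (n, l, l) intermediate C computes l times more inner products than are actually needed; only its diagonal is kept.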
This is not the first time this issue has been raised here. You may try this piece of code, which may take less memory:

(A*x).sum(axis=1).T

Nadav

-----Original Message-----
From: numpy-discussion-bounces@scipy.org on behalf of Gideon Simpson
Sent: Sun 18-Jan-09 07:30
To: Discussion of Numerical Python
Subject: [Numpy-discussion] efficient usage of tensordot
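Nadav's suggestion relies on broadcasting: x with shape (m, l) is treated as (1, m, l) against A with shape (n, m, l), and summing over axis 1 performs the per-k contraction without ever forming the (n, l, l) tensordot intermediate. A minimal sketch (note that the trailing .T in the suggestion transposes the result to shape (l, n); it is dropped here so the layout matches the (n, l) output of the tensordot/diagonal version):

```python
import numpy as np

n, m, l = 4, 3, 5
rng = np.random.default_rng(1)
A = rng.standard_normal((n, m, l))
x = rng.standard_normal((m, l))

# Broadcasting: (A * x)[i, j, k] = A[i, j, k] * x[j, k].
# Summing over axis 1 contracts j:
# out[i, k] = sum_j A[i, j, k] * x[j, k], shape (n, l).
out = (A * x).sum(axis=1)

# Same values as the tensordot + diagonal formulation, but the
# largest intermediate here is (n, m, l) instead of (n, l, l).
ref = np.diagonal(np.tensordot(A, x, axes=(1, 0)), axis1=1, axis2=2)
assert np.allclose(out, ref)

# In NumPy 1.6 and later, einsum expresses the contraction directly,
# without either intermediate:
alt = np.einsum('ijk,jk->ik', A, x)
assert np.allclose(out, alt)
```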
participants (2)
- Gideon Simpson
- Nadav Horesh