
numpy.dot.__doc__

    matrixproduct(a,b)
    Returns the dot product of a and b for arrays of floating point types.
    Like the generic numpy equivalent, the product sum is over the last
    dimension of a and the second-to-last dimension of b. NB: The first
    argument is not conjugated.

Does numpy support summing over arbitrary dimensions, as in tensor calculus? I could cook up something that uses transpose and dot, but it's reasonably tricky, I think :)

Simon.

--
Simon Burton, B.Sc.
Licensed PO Box 8066
ANU Canberra 2601
Australia
Ph. 61 02 6249 6940
http://arrowtheory.com
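A quick illustration of the docstring's contraction rule, with made-up shapes (an editorial sketch, not from the original thread):

    import numpy as np

    # np.dot sums the last axis of a against the second-to-last axis of b,
    # so a (2,3) array dotted with a (4,3,5) array contracts the two axes of size 3.
    a = np.ones((2, 3))
    b = np.ones((4, 3, 5))
    print(np.dot(a, b).shape)   # (2, 4, 5): a's leading axes, then b's remaining axes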

Simon Burton wrote:
numpy.dot.__doc__
matrixproduct(a,b)
Returns the dot product of a and b for arrays of floating point types. Like the generic numpy equivalent, the product sum is over the last dimension of a and the second-to-last dimension of b. NB: The first argument is not conjugated.
Does numpy support summing over arbitrary dimensions, as in tensor calculus?
I could cook up something that uses transpose and dot, but it's reasonably tricky, I think :)
I've just added tensordot to NumPy (adapted and enhanced from numarray). It allows you to sum over an arbitrary number of axes. It uses a 2-d dot product internally, as that is optimized if you have a fast BLAS installed.

Example:

If a.shape is (3,4,5) and b.shape is (4,3,2), then

    tensordot(a, b, axes=([1,0],[0,1]))

returns a (5,2) array which is equivalent to the code:

    c = zeros((5,2))
    for i in range(5):
        for j in range(2):
            for k in range(3):
                for l in range(4):
                    c[i,j] += a[k,l,i]*b[l,k,j]

-Travis
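A minimal self-contained check of that equivalence (the array values below are arbitrary, chosen only so the comparison is meaningful):

    import numpy as np

    a = np.arange(3 * 4 * 5, dtype=float).reshape(3, 4, 5)
    b = np.arange(4 * 3 * 2, dtype=float).reshape(4, 3, 2)

    # Explicit quadruple loop from the example above.
    c = np.zeros((5, 2))
    for i in range(5):
        for j in range(2):
            for k in range(3):
                for l in range(4):
                    c[i, j] += a[k, l, i] * b[l, k, j]

    # tensordot contracts a's axes (1, 0) against b's axes (0, 1).
    print(np.allclose(np.tensordot(a, b, axes=([1, 0], [0, 1])), c))   # True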

On 8/29/06, Travis Oliphant <oliphant.travis@ieee.org> wrote:
Example:
If a.shape is (3,4,5) and b.shape is (4,3,2)
Then
tensordot(a, b, axes=([1,0],[0,1]))
returns a (5,2) array which is equivalent to the code:
c = zeros((5,2))
for i in range(5):
    for j in range(2):
        for k in range(3):
            for l in range(4):
                c[i,j] += a[k,l,i]*b[l,k,j]
That's pretty cool.
From there it shouldn't be too hard to make a wrapper that would allow you to write c_ji = a_kli * b_lkj (w/sum over k and l) like:
tensordot_ez(a, 'kli', b, 'lkj', out='ji')

or maybe with numexpr-like syntax:

    tensor_expr('_ji = a_kli * b_lkj')

[pulling a and b out of the globals()/locals()]

Might be neat to be able to build a callable function for repeated reuse:

    tprod = tensor_func('_ji = [0]_kli * [1]_lkj')   # [0] and [1] become parameters 0 and 1
    c = tprod(a, b)

or to pass the output through a (potentially reused) array argument:

    tprod1 = tensor_func('[0]_ji = [1]_kli * [2]_lkj')
    tprod1(c, a, b)

--bb
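A rough sketch of what such a tensordot_ez wrapper could look like (the name, signature, and index-string convention follow the proposal above and are hypothetical, not existing NumPy API; repeated indices within a single operand are not handled):

    import numpy as np

    def tensordot_ez(a, a_idx, b, b_idx, out):
        # Sum over every index letter that appears in both operands,
        # then transpose the result into the requested output order.
        shared = [k for k in a_idx if k in b_idx]
        axes = ([a_idx.index(k) for k in shared],
                [b_idx.index(k) for k in shared])
        result = np.tensordot(a, b, axes=axes)
        # tensordot orders the remaining axes as a's free axes followed by b's.
        free = ([k for k in a_idx if k not in shared]
                + [k for k in b_idx if k not in shared])
        return result.transpose([free.index(k) for k in out])

    a = np.ones((3, 4, 5))
    b = np.ones((4, 3, 2))
    c = tensordot_ez(a, 'kli', b, 'lkj', out='ji')   # c_ji = sum over k,l of a_kli * b_lkj
    print(c.shape)   # (2, 5)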
participants (3)
- Bill Baxter
- Simon Burton
- Travis Oliphant