Hello,

I have a large number of points (shape (n,3)) and a matching number of 3x3 matrices (shape (n,3,3)), and I want to compute the product of each matrix times the corresponding point. I can't see a way to do this operation with dot or tensordot, since these routines either sum across an index or treat it as independent between the two arguments.

In this case I can use the fact that 3 is small to do it, but is there a clean way to handle this kind of "array of matrix" situation in general?

Thanks,
Geoffrey
On Thu, Feb 28, 2008 at 4:34 PM, Geoffrey Irving <irving@pixar.com> wrote:
Hello,
I have a large number of points (shape (n,3)), and a matching number of 3x3 matrices (shape (n,3,3)), and I want to compute the product of each matrix times the corresponding point.
I can't see a way to do this operation with dot or tensordot, since these routines either sum across an index or treat it as independent between the two arguments.
In this case I can use the fact that 3 is small to do it, but is there a clean way to handle this kind of "array of matrix" situation in general?
For dot-products, yes. We can use broadcasting followed by axis summation.

In [20]: from numpy import *

In [21]: n = 5

In [22]: A = arange(n*3*3).reshape([n,3,3])

In [23]: b = 10*arange(n*3).reshape([n,3])

In [24]: A
Out[24]:
array([[[ 0,  1,  2],
        [ 3,  4,  5],
        [ 6,  7,  8]],

       [[ 9, 10, 11],
        [12, 13, 14],
        [15, 16, 17]],

       [[18, 19, 20],
        [21, 22, 23],
        [24, 25, 26]],

       [[27, 28, 29],
        [30, 31, 32],
        [33, 34, 35]],

       [[36, 37, 38],
        [39, 40, 41],
        [42, 43, 44]]])

In [25]: b
Out[25]:
array([[  0,  10,  20],
       [ 30,  40,  50],
       [ 60,  70,  80],
       [ 90, 100, 110],
       [120, 130, 140]])

In [26]: for i in range(n):
   ....:     print dot(A[i], b[i])
   ....:
[  50  140  230]
[1220 1580 1940]
[4010 4640 5270]
[ 8420  9320 10220]
[14450 15620 16790]

In [27]: (A * b.reshape([n, 1, 3])).sum(axis=-1)
Out[27]:
array([[   50,   140,   230],
       [ 1220,  1580,  1940],
       [ 4010,  4640,  5270],
       [ 8420,  9320, 10220],
       [14450, 15620, 16790]])

The magic is in In[27]. We reshape the array of vectors to be compatible with the shape of the array of matrices. When we multiply the two together, it is as if we multiplied two (n,3,3) matrices, the latter being the vectors repeated 3 times. Then we sum along the rows of each of the product matrices to get the desired dot product.

PS: Are you perchance the Geoffrey Irving I knew at CalTech, class of '03?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
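A hedged aside, not part of the original exchange: the same reshape-multiply-sum pattern generalizes to non-square stacks. For A with shape (n,i,j) and b with shape (n,j), something like the following sketch should give the (n,i) results; the helper name and shapes are illustrative assumptions.

import numpy

def batch_matvec(A, b):
    # A: (n, i, j) stack of matrices, b: (n, j) stack of vectors.
    # Broadcast each vector across the row axis of its matrix,
    # then sum out the shared index j, as in In[27] above.
    n, i, j = A.shape
    return (A * b.reshape([n, 1, j])).sum(axis=-1)   # shape (n, i)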
On Thu, Feb 28, 2008 at 05:57:29PM -0600, Robert Kern wrote:
On Thu, Feb 28, 2008 at 4:34 PM, Geoffrey Irving <irving@pixar.com> wrote:
Hello,
I have a large number of points (shape (n,3)), and a matching number of 3x3 matrices (shape (n,3,3)), and I want to compute the product of each matrix times the corresponding point.
I can't see a way to do this operation with dot or tensordot, since these routines either sum across an index or treat it as independent between the two arguments.
In this case I can use the fact that 3 is small to do it, but is there a clean way to handle this kind of "array of matrix" situation in general?
For dot-products, yes. We can use broadcasting followed by axis summation.
In [20]: from numpy import *
In [21]: n = 5
In [22]: A = arange(n*3*3).reshape([n,3,3])
In [23]: b = 10*arange(n*3).reshape([n,3])
In [24]: A
Out[24]:
array([[[ 0,  1,  2],
        [ 3,  4,  5],
        [ 6,  7,  8]],

       [[ 9, 10, 11],
        [12, 13, 14],
        [15, 16, 17]],

       [[18, 19, 20],
        [21, 22, 23],
        [24, 25, 26]],

       [[27, 28, 29],
        [30, 31, 32],
        [33, 34, 35]],

       [[36, 37, 38],
        [39, 40, 41],
        [42, 43, 44]]])

In [25]: b
Out[25]:
array([[  0,  10,  20],
       [ 30,  40,  50],
       [ 60,  70,  80],
       [ 90, 100, 110],
       [120, 130, 140]])

In [26]: for i in range(n):
   ....:     print dot(A[i], b[i])
   ....:
[  50  140  230]
[1220 1580 1940]
[4010 4640 5270]
[ 8420  9320 10220]
[14450 15620 16790]

In [27]: (A * b.reshape([n, 1, 3])).sum(axis=-1)
Out[27]:
array([[   50,   140,   230],
       [ 1220,  1580,  1940],
       [ 4010,  4640,  5270],
       [ 8420,  9320, 10220],
       [14450, 15620, 16790]])
The magic is in In[27]. We reshape the array of vectors to be compatible with the shape of the array of matrices. When we multiply the two together, it is as if we multiplied two (n,3,3) matrices, the latter being the vectors repeated 3 times. Then we sum along the rows of each of the product matrices to get the desired dot product.
Thanks! That'll do nicely. For large matrices, that could be problematic due to the blowup in intermediate memory, but on the other hand, for large matrices a loop through the top-level index wouldn't add much cost.
PS: Are you perchance the Geoffrey Irving I knew at CalTech, class of '03?
Yep. That would answer the question I had when I started reading this email. However, it's spelled Caltech, not CalTech! Geoffrey
On Thu, Feb 28, 2008 at 6:43 PM, Geoffrey Irving <irving@pixar.com> wrote:
The magic is in In[27]. We reshape the array of vectors to be compatible with the shape of the array of matrices. When we multiply the two together, it is as if we multiplied two (n,3,3) matrices, the latter being the vectors repeated 3 times. Then we sum along the rows of each of the product matrices to get the desired dot product.
Thanks! That'll do nicely.
For large matrices, that could be problematic due to the blowup in intermediate memory, but on the other hand, for large matrices a loop through the top-level index wouldn't add much cost.
If you really want to save memory and you can destroy A, then you could do the multiplication in-place. If you really want to get fancy and can destroy b, you can use it as storage for the summation output, too.

In [11]: A *= b.reshape([n,1,3])

In [12]: c = A.sum(axis=-1, out=b)

In [13]: b
Out[13]:
array([[   50,   140,   230],
       [ 1220,  1580,  1940],
       [ 4010,  4640,  5270],
       [ 8420,  9320, 10220],
       [14450, 15620, 16790]])

In [14]: c is b
Out[14]: True
PS: Are you perchance the Geoffrey Irving I knew at CalTech, class of '03?
Yep. That would answer the question I had when I started reading this email. However, it's spelled Caltech, not CalTech!
Yeah, yeah, yeah. The Wikis, they have taken over my finger ReFlexes.

NumPy Rudds += 1. Take that, Tim Hochberg! :-)

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
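One hedged caveat, added here rather than taken from the thread: sum only writes into out when the shapes match exactly, so the destructive trick relies on b already having the result's shape (n, 3). Roughly:

c = A.sum(axis=-1, out=b)        # fine: the axis=-1 sum has shape (n, 3), same as b
# A.sum(axis=-1, out=b.ravel())  # would fail with a shape-mismatch error: (n*3,) != (n, 3)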
On Thu, Feb 28, 2008 at 06:55:11PM -0600, Robert Kern wrote:
On Thu, Feb 28, 2008 at 6:43 PM, Geoffrey Irving <irving@pixar.com> wrote:
The magic is in In[27]. We reshape the array of vectors to be compatible with the shape of the array of matrices. When we multiply the two together, it is as if we multiplied two (n,3,3) matrices, the latter being the vectors repeated 3 times. Then we sum along the rows of each of the product matrices to get the desired dot product.
Thanks! That'll do nicely.
For large matrices, that could be problematic due to the blowup in intermediate memory, but on the other hand, for large matrices a loop through the top-level index wouldn't add much cost.
If you really want to save memory and you can destroy A, then you could do the multiplication in-place. If you really want to get fancy and can destroy b, you can use it as storage for the summation output, too.
In [11]: A *= b.reshape([n,1,3])
In [12]: c = A.sum(axis=-1, out=b)
In [13]: b
Out[13]:
array([[   50,   140,   230],
       [ 1220,  1580,  1940],
       [ 4010,  4640,  5270],
       [ 8420,  9320, 10220],
       [14450, 15620, 16790]])

In [14]: c is b
Out[14]: True
By large I meant the case where the array shapes are (n,i,j) and (n,j,k), with all indices large. The permanent memory cost is O(nij + njk), but O(nijk) flops are required, and the broadcast/sum solution would need O(nijk) intermediate storage for the elementwise products. It doesn't matter in my case, though, so it's just a curiosity.
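A sketch of that compromise (the helper name and chunk size are illustrative assumptions, not from this thread): loop over the top-level index in blocks, so the temporary is (chunk,i,j,k) rather than (n,i,j,k) while most of the work stays vectorized.

import numpy

def chunked_batch_matmul(A, B, chunk=256):
    # A: (n, i, j), B: (n, j, k); returns the (n, i, k) stack of products
    # using the broadcast/sum trick on one chunk of matrices at a time.
    n, i, j = A.shape
    k = B.shape[2]
    out = numpy.empty((n, i, k), dtype=A.dtype)
    for s in range(0, n, chunk):
        a = A[s:s+chunk]                # (c, i, j)
        b = B[s:s+chunk]                # (c, j, k)
        # temporary is (c, i, j, k) instead of (n, i, j, k)
        out[s:s+chunk] = (a[:, :, :, None] * b[:, None, :, :]).sum(axis=2)
    return out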
PS: Are you perchance the Geoffrey Irving I knew at CalTech, class of '03?
Yep. That would answer the question I had when I started reading this email. However, it's spelled Caltech, not CalTech!
Yeah, yeah, yeah. The Wikis, they have taken over my finger ReFlexes.
NumPy Rudds += 1. Take that, Tim Hochberg! :-)
There are a bunch of techers here (from further back than us), but I may be the only one actively using numpy at the moment. It would be very painful to have to lug large mesh data around in Python without it. And I wouldn't have the pleasure of writing stuff like this:

# triangulate a polygon mesh, given as (counts, vertices, points).
# counts is the number of vertices in each polygon, and vertices
# holds the concatenated vertex indices of each polygon.
triangleFace = numpy.arange(len(counts)).repeat(counts-2)
vertex1 = (numpy.add.accumulate(counts)-counts)[triangleFace]
vertex2 = numpy.arange(len(triangleFace)) + 2*triangleFace + 1
triangles = vertices[numpy.vstack([vertex1,vertex2,vertex2+1]).transpose()]

Geoffrey
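For instance (a made-up two-polygon mesh, just to illustrate how the fan triangulation comes out), a quad followed by a triangle:

import numpy

counts = numpy.array([4, 3])            # a quad and a triangle
vertices = numpy.arange(7)              # concatenated vertex indices of both polygons

triangleFace = numpy.arange(len(counts)).repeat(counts-2)       # [0, 0, 1]
vertex1 = (numpy.add.accumulate(counts)-counts)[triangleFace]   # [0, 0, 4]
vertex2 = numpy.arange(len(triangleFace)) + 2*triangleFace + 1  # [1, 2, 5]
triangles = vertices[numpy.vstack([vertex1,vertex2,vertex2+1]).transpose()]
# triangles == [[0, 1, 2], [0, 2, 3], [4, 5, 6]]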