For broadcasting, can an m by n by k array be multiplied with an n by k array?

Hello all,

Can an m x n x k array be multiplied with an n x k array? Looking at page 46 of the NumPy user guide (https://docs.scipy.org/doc/numpy-1.11.0/numpy-user-1.11.0.pdf), it should work. It gives the following example:

A      (3d array): 15 x 3 x 5
B      (2d array):      3 x 5
Result (3d array): 15 x 3 x 5

But the rule did not work for me. Here's my toy example:
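(The snippet itself was trimmed from the archive; a minimal reconstruction, assuming m, n, k = 3, 4, 5 and placeholder data:)

import numpy as np

A = np.arange(3 * 4 * 5).reshape(3, 4, 5)  # m x n x k
B = np.ones((4, 5))                        # n x k

np.dot(A, B)
# raises ValueError: shapes (3,4,5) and (4,5) not aligned (5 != 4)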
Am I misreading something? Thank you in advance!

On Sat, Apr 20, 2019 at 12:24 AM C W <tmrsg11@gmail.com> wrote:
Am I misreading something? Thank you in advance!
Hey,

You are missing that the broadcasting rules typically apply to arithmetic operations and to methods that are explicitly specified to broadcast. There is no mention of broadcasting in the docs of np.dot [1], and its behaviour is a bit more complicated. Specifically for multidimensional arrays (which you have), the doc says:

    If a is an N-D array and b is an M-D array (where M >= 2), it is a sum product over the last axis of a and the second-to-last axis of b:

    dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])

So your (3,4,5) @ (4,5) would want to collapse the 5-length (last) axis of `a` with the 4-length (second-to-last) axis of `b`; the lengths don't match, so this won't work. If you want elementwise multiplication according to the broadcasting rules, just use `a * b`:
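(A minimal sketch of the broadcasting behaviour, with placeholder data:)

import numpy as np

a = np.arange(3 * 4 * 5).reshape(3, 4, 5)
b = np.ones((4, 5))

# b's shape (4, 5) matches a's trailing axes, so b is broadcast
# across a's leading axis of length 3:
print((a * b).shape)  # -> (3, 4, 5)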
[1]: https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html

Thanks, you are right. I overlooked that it's for addition. The original problem is that I have an array X (an RGB image, 3 layers) and an array y. I wanted to do np.dot(X, y.T).
But np.dot() gives me four axes, as shown below:
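(The pasted output was trimmed from the archive; a sketch of the shapes involved, assuming X is 3 x 28 x 28 and y is 1 x 28 x 28 as described below:)

import numpy as np

X = np.zeros((3, 28, 28))  # RGB image: 3 channels of 28 x 28
y = np.zeros((1, 28, 28))  # note the leading singleton axis

out = np.dot(X, y.T)       # y.T has shape (28, 28, 1)
print(out.shape)           # -> (3, 28, 28, 1)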
The fourth axis is unexpected. Should y.shape be (28, 28), not (1, 28, 28)? Thanks again!

On Fri, Apr 19, 2019 at 6:39 PM Andras Deak <deak.andris@gmail.com> wrote:

I agree with Stephan: I can never remember how np.dot works for multidimensional arrays, and I rarely need its behaviour. einsum, on the other hand, is both intuitive to me and more general.

Anyway, yes: if y has a leading singleton dimension then its transpose will have shape (28, 28, 1), which leads to that unexpected trailing singleton dimension. If you look at how the shape changes in each step (first the transpose, then np.dot) you can see that everything is doing what it should (i.e. what you tell it to do).

With np.einsum you'd have to consider that you want to pair the last axis of X with the first axis of y.T, i.e. the last axis of y (assuming the latter has only two axes, so it doesn't have that leading singleton). This would correspond to the rule 'abc,dc->abd', or, if you want to allow arbitrary leading dimensions on y, 'abc,...c->ab...':
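(The example was trimmed from the archive; a minimal sketch with placeholder data:)

import numpy as np

X = np.zeros((3, 28, 28))
y = np.zeros((28, 28))  # no leading singleton

out = np.einsum('abc,dc->abd', X, y)
print(out.shape)   # -> (3, 28, 28)

# Allowing arbitrary leading dimensions on y:
y3 = np.zeros((1, 28, 28))
out2 = np.einsum('abc,...c->ab...', X, y3)
print(out2.shape)  # -> (3, 28, 1, 28)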
András

On Sat, Apr 20, 2019 at 1:06 AM Stephan Hoyer <shoyer@gmail.com> wrote:
