Two questions about PEP 465 dot product
1. Is there a simple expression using existing numpy functions that implements PEP 465 semantics for @?

2. Suppose I have a function that takes two vectors x and y, and a matrix M, and returns x.dot(M.dot(y)). I would like to "vectorize" this function so that it works with x and y of any ndim >= 1 and M of any ndim >= 2, treating multi-dimensional x and y as arrays of vectors and M as an array of matrices (broadcasting as necessary). The result should be an array of xMy products. How would I achieve that using PEP 465's @?
On Thu, May 21, 2015 at 6:06 PM, Alexander Belopolsky wrote:
1. Is there a simple expression using existing numpy functions that implements PEP 465 semantics for @?
Not yet.
2. Suppose I have a function that takes two vectors x and y, and a matrix M and returns x.dot(M.dot(y)). I would like to "vectorize" this function so that it works with x and y of any ndim >= 1 and M of any ndim >= 2 treating multi-dimensional x and y as arrays of vectors and M as an array of matrices (broadcasting as necessary). The result should be an array of xMy products. How would I achieve that using PEP 465's @?
(x[..., np.newaxis, :] @ M @ y[..., :, np.newaxis])[..., 0, 0]

Alternatively, you might prefer something like this (though it won't yet take advantage of BLAS):

np.einsum("...i,...ij,...j", x, M, y)

Alternatively, there's been some discussion of the possibility of adding specialized gufuncs for broadcasted vector-vector, vector-matrix, matrix-vector multiplication, which wouldn't do the magic vector promotion that dot and @ do.

-n
-- Nathaniel J. Smith -- http://vorpus.org
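A minimal sketch of the two expressions above, checking that they agree with an explicit loop over x.dot(M.dot(y)) (the shapes here are arbitrary, chosen only for illustration):

```python
import numpy as np

np.random.seed(0)
x = np.random.rand(5, 3)     # a stack of 5 three-vectors
y = np.random.rand(5, 3)
M = np.random.rand(5, 3, 3)  # a stack of 5 3x3 matrices

# @-based expression: promote x to a stack of row vectors and y to a
# stack of column vectors, then strip the singleton dimensions
via_matmul = (x[..., np.newaxis, :] @ M @ y[..., :, np.newaxis])[..., 0, 0]

# einsum equivalent (one call, but no BLAS)
via_einsum = np.einsum("...i,...ij,...j", x, M, y)

# reference: an explicit loop over the stack dimension
ref = np.array([x[k].dot(M[k].dot(y[k])) for k in range(5)])

assert np.allclose(via_matmul, ref)
assert np.allclose(via_einsum, ref)
```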
On Thu, May 21, 2015 at 9:37 PM, Nathaniel Smith wrote:
.. there's been some discussion of the possibility of adding specialized gufuncs for broadcasted vector-vector, vector-matrix, matrix-vector multiplication, which wouldn't do the magic vector promotion that dot and @ do.
This would be nice. What I would like to see is some consistency between multi-matrix support in linalg methods and dot.

For example, when A is a matrix and b is a vector, and a = linalg.solve(A, b), then dot(A, a) returns b; but if either or both of A and b are stacks, this invariant does not hold. I would like to see a function (say xdot) that I can use instead of dot and have xdot(A, a) return b whenever a = linalg.solve(A, b). Similarly, if w, v = linalg.eig(A), then dot(A, v) returns w * v, but only if A is 2d.
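A sketch of the inconsistency being described, assuming current NumPy where np.matmul implements the PEP 465 stacking rules (shapes are arbitrary):

```python
import numpy as np

np.random.seed(1)
A = np.random.rand(4, 3, 3)  # a stack of 4 3x3 matrices
b = np.random.rand(4, 3, 1)  # a stack of 4 right-hand sides

a = np.linalg.solve(A, b)    # solve vectorizes over the stack dimension

# PEP 465 semantics recover b: matmul also vectorizes over the stack
assert np.allclose(np.matmul(A, a), b)

# plain dot does not: it contracts A's last axis against a's
# second-to-last axis for *every* pair of stack entries
print(np.dot(A, a).shape)    # (4, 3, 4, 1), not (4, 3, 1)
```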
On May 22, 2015 11:00 AM, "Alexander Belopolsky" wrote:
I believe this equivalence holds if xdot(x, y) = x @ y, because solve() does follow the PEP 465 semantics for shape handling. Or at least, it's intended to. Of course we will also expose PEP 465 matmul semantics under some name that doesn't require the new syntax (probably not "xdot", though ;-)).
Similarly, if w,v = linalg.eig(A), then dot(A,v) returns w * v, but only if A is 2d.
Again, A @ v I believe does the right thing, though I'm not positive -- you might need a swapaxes or matvec or something. Let us know if you work it out :-).

Note that it still won't be equivalent to w * v, because w * v doesn't broadcast the way you want :-). You need w[..., np.newaxis, :] * v, I think.

-n
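A quick check of the suggestion above, assuming current NumPy where linalg.eig accepts stacked input and @ follows the PEP 465 rules (shapes arbitrary):

```python
import numpy as np

np.random.seed(2)
A = np.random.rand(2, 3, 3)  # a stack of 2 3x3 matrices

w, v = np.linalg.eig(A)      # w: (2, 3) eigenvalues, v: (2, 3, 3)

# The columns of each v[k] are eigenvectors, so A[k] @ v[k] equals
# v[k] scaled column-wise by w[k]; w must broadcast against the
# *columns* of v, hence the extra axis.
assert np.allclose(A @ v, w[..., np.newaxis, :] * v)
```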
_______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
At some point, someone is going to make a single documentation page describing all of this, right? Tables, mathtex, and such? I get woozy whenever I see this discussion go on.

Ben Root
On May 22, 2015 11:34 AM, "Benjamin Root" wrote:
That does seem like a good idea, doesn't it. Following the principle that recently-confused users write the best docs, any interest in taking a shot at writing such a thing?

-n
That assumes that the said recently-confused ever get to the point of understanding it... and I personally don't do much matrix math work, so I don't have the proper mental context. I just know that coworkers are going to be coming to me asking questions because I am the de facto "python guy". So, having a page I can point them to would be extremely valuable.

Ben Root
On May 22, 2015 1:26 PM, "Benjamin Root" wrote:
Well, I don't think it's that complicated really. For whatever that's worth :-). My best attempt is here, anyway: https://www.python.org/dev/peps/pep-0465/#semantics

The short version is: for 1d and 2d inputs it acts just like dot(). For higher dimension inputs like (i, j, n, m) it acts like any other gufunc (e.g., everything in np.linalg) -- it treats this as an i-by-j stack of n-by-m matrices and is vectorized over the i, j dimensions. And 0d inputs are an error.

-n
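The shape rules just summarized can be sketched as follows (a minimal illustration, not an exhaustive one):

```python
import numpy as np

# 1d @ 1d: inner product, a scalar, just like dot()
assert (np.ones(3) @ np.ones(3)) == 3.0

# 2d @ 2d: ordinary matrix multiplication
assert (np.ones((2, 3)) @ np.ones((3, 4))).shape == (2, 4)

# A 1d operand is promoted to a row/column vector and the extra
# dimension is removed from the result
assert (np.ones(3) @ np.ones((3, 4))).shape == (4,)

# Higher-dimensional inputs are treated as stacks of matrices,
# vectorized (gufunc-style) over the leading dimensions
assert (np.ones((5, 2, 3)) @ np.ones((5, 3, 4))).shape == (5, 2, 4)

# 0d (scalar) inputs are an error
try:
    np.array(2.0) @ np.ones(3)
except (ValueError, TypeError):
    print("0d operands rejected")
```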
Then add in broadcasting behavior...
On May 22, 2015 2:40 PM, "Benjamin Root" wrote:
Then add in broadcasting behavior...
Vectorized functions broadcast over the vectorized dimensions; there's nothing special about @ in this regard.

-n
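A small sketch of that broadcasting, assuming current NumPy's matmul: the stack (leading) dimensions broadcast against each other exactly like element-wise operations, while the last two dimensions follow the matrix rules.

```python
import numpy as np

A = np.ones((3, 1, 2, 2))  # a 3x1 grid of 2x2 matrices
B = np.ones((4, 2, 5))     # a length-4 stack of 2x5 matrices

# Leading dims broadcast: (3, 1) x (4,) -> (3, 4); matrix dims give 2x5
C = A @ B
assert C.shape == (3, 4, 2, 5)
```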
On Thu, May 21, 2015 at 7:06 PM, Alexander Belopolsky wrote:
If you are willing to run Python 3.5 (use 3.6.0a3, a4 crawls with the bugs), you can use gh-5878 https://github.com/numpy/numpy/pull/5878. The override mechanisms are still in process in Nathaniel's PR, so that may change. I'd welcome any feedback. Chuck
On Fri, May 22, 2015 at 12:02 AM, Charles R Harris <charlesr.harris@gmail.com> wrote:
Oops, make that 3.5.0a3.

Chuck
participants (4)
- Alexander Belopolsky
- Benjamin Root
- Charles R Harris
- Nathaniel Smith