dot function or dot notation, matrices, arrays?
Is it possible to calculate a dot product in numpy either by notation
(a ^ b, where ^ is a possible notation) or by calling a dot function
(dot(a, b))? I'm trying to use a column matrix for both "vectors".
Perhaps I need to somehow change them to arrays?
--
Wayne Watson (Watson Adventures, Prop., Nevada City, CA)
(121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time
Obz Site: 39° 15' 7" N, 121° 2' 32" W, 2700 feet
"... humans' innate skills with numbers isn't much
better than that of rats and dolphins."
-- Stanislas Dehaene, neurosurgeon
Web Page:
On Fri, Dec 18, 2009 at 1:51 PM, Wayne Watson wrote: Is it possible to calculate a dot product in numpy ... Perhaps I need to somehow change them to arrays?
Does this do what you want?
>>> x
matrix([[1],
        [2],
        [3]])
>>> x.T * x
matrix([[14]])
>>> np.dot(x.T, x)
matrix([[14]])
That should do it. Thanks. How do I get the scalar result by itself?
NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Very good. Is there a scalar product in numpy? Keith Goodman wrote:
On Fri, Dec 18, 2009 at 2:51 PM, Wayne Watson
wrote: That should do it. Thanks. How do I get the scalar result by itself?
>>> np.dot(x.T, x)[0, 0]
14

or

>>> x = np.array([1, 2, 3])
>>> np.dot(x, x)
14
Well, they aren't quite the same. If a is the length of A, and b is the length of B, then A dot B = a*b*cos(theta). I'm still not familiar enough with numpy or math to know if there's some function that will produce a from A. It's easy enough to do, a**2 = A[0]**2 + ..., but I would like to think it's a common enough need that there would be something available like sumsq(). Keith Goodman wrote:
On Fri, Dec 18, 2009 at 3:22 PM, Wayne Watson
wrote: Is there a scalar product in numpy?
Isn't that the same thing as a dot product? np.dot doesn't do what you want?
On Fri, Dec 18, 2009 at 3:40 PM, Wayne Watson wrote: ... is there some function that will produce a from A? ... something available like sumsq()?
In your usage, dot product and scalar product are synonymous: a = sqrt(A dot A).

There are some contexts in which "scalar" product and "dot" product don't mean exactly the same thing (e.g., tensors, where "dot" is typically synonymous with "inner," which, in the general case, does not result in a scalar; or a multiplication-like functional where a function is mapped to a scalar, in which context we typically - but not uniformly - do not describe the product as a dot product). But unless you're working in one of those advanced contexts, scalar and dot are typically used interchangeably. In particular, IIUC, in NumPy, unless you're using it to calculate a tensor product that doesn't result in a scalar, dot and scalar product are synonymous.

DG
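A sketch of that relationship (the example vector is not from the thread); np.linalg.norm is NumPy's built-in for the vector length, so no hand-rolled sumsq() is needed:

```python
import numpy as np

# a = sqrt(A dot A): the vector's length via the dot product.
A = np.array([3.0, 4.0])
a = np.sqrt(np.dot(A, A))
print(a)                  # 5.0
print(np.linalg.norm(A))  # 5.0 -- the built-in equivalent
```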
Not quite. The point of the scalar product is to produce theta. My intended use is that found in calculus. Nevertheless, my question is how to produce the result with some set of functions that is close to minimal. I could finish this off by using the common definition found in a calculus book (sum of squares, loop, etc.), but, from where I stand -- just getting into numpy -- this is about discovering more about numpy and math. However, it's not just an example. I'm working on a task in celestial computations that has a definite goal. The dot product is very useful to it, since the work is very oriented towards vectors and matrices. Surprisingly, it doesn't seem to be available in numpy's bag of tricks.
On 12/18/2009 7:12 PM, Wayne Watson wrote:
The point of the scalar product is to produce theta.
As David said, that is just NumPy's `dot`.
>>> a = np.array([0, 2])
>>> b = np.array([5, 0])
>>> theta = np.arccos(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))
>>> theta
1.5707963267948966
>>> theta / np.pi
0.5
hth, Alan Isaac
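The same recipe carries over to the three-dimensional case discussed in the thread; a sketch (the example vectors are not from the thread), using np.linalg.norm for the two lengths:

```python
import numpy as np

# Angle between two 3-D vectors via the scalar (dot) product.
A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
theta = np.arccos(np.dot(A, B) / (np.linalg.norm(A) * np.linalg.norm(B)))
print(theta / np.pi)  # 0.5, i.e. a 90-degree angle
```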
Nicely done.
I'll amend that. I should have said, "Dot's all folks." -- Bugs Bunny
np.dot(x.flat, x.flat) _is exactly_ the sum of squares of x.flat. Your math education appears to have drawn a distinction between "dot product" and "scalar product" that, when one is talking about Euclidean vectors, just isn't there: in that context, they are one and the same thing.
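A quick check of that claim, using the column matrix from earlier in the thread (a sketch, not part of the original message):

```python
import numpy as np

# .flat yields the elements in order, so np.dot on it is the sum of squares,
# returned as a plain scalar rather than a 1x1 matrix.
x = np.matrix([[1], [2], [3]])
print(np.dot(x.flat, x.flat))  # 14 == 1**2 + 2**2 + 3**2
```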
DG
I'm trying to compute the angle between two vectors in three-dimensional space. For that, I need to use the "scalar (dot) product", according to a calculus book I'm holding in my hands right now. I've used dot() successfully to produce the necessary angle. My program works just fine.

In the case of the dot() function, one must use np.dot(x.T, x), where x is a 3x1 column.

I'm not quite sure what your point is about dot()* unless you are thinking in some non-Euclidean fashion. One can form np.dot(a, b) with a and b arrays of 3x4 and 4x2 shape to arrive at a 3x2 array. That's definitely not a scalar. Is there a need for this sort of calculation in non-Euclidean geometry, which I have never dealt with?

*Maybe it's about something else related to it.
There's a difference between 1D and 2D arrays that's important here. For a 1D array, np.dot(x.T, x) == np.dot(x, x), since there's only one dimension. NumPy is all about arrays, not matrices and vectors. Dag Sverre
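That distinction can be seen directly (a sketch, not from the thread): transposing a 1-D array is a no-op, so the transpose only matters once you make the column explicit as a 2-D array:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
print(v.T.shape)             # (3,) -- .T leaves a 1-D array unchanged
print(np.dot(v, v))          # 14.0, a true scalar, no transpose needed

m = v.reshape(3, 1)          # explicit 2-D column
print(np.dot(m.T, m).shape)  # (1, 1) -- a 2-D result, not a scalar
```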
A 4x1, 1x7, and 1x5 would be examples of a 1D array or matrix, right? Are you saying that instead of using a rotation matrix like

    theta = 5.0  # degrees
    m1 = matrix([[2], [5]])
    rotCW = matrix([[ cosD(theta), sinD(theta)],
                    [-sinD(theta), cosD(theta)]])
    m2 = rotCW * m1
    m1 = np.array(m1)
    m2 = np.array(m2)

I should use a 2-D array for rotCW? So why does numpy have a matrix class? Is the class only used when working with matplotlib? To get the scalar value (sum of squares) I had to use a transpose, T, on one argument.
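For comparison, a sketch of the same clockwise rotation done entirely with arrays: a 2-D array for the rotation and a 1-D array for the vector. (cosD/sinD above are Wayne's own degree-based helpers; np.radians stands in for them here.)

```python
import numpy as np

theta = np.radians(5.0)  # degrees -> radians, replacing the cosD/sinD helpers
rotCW = np.array([[ np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
v = np.array([2.0, 5.0])
v2 = np.dot(rotCW, v)    # rotated vector, still a 1-D array of shape (2,)
print(np.allclose(np.linalg.norm(v2), np.linalg.norm(v)))  # True: rotation preserves length
```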
On 12/19/2009 11:45 AM, Wayne Watson wrote:
A 4x1, 1x7, and 1x5 would be examples of a 1D array or matrix, right?
Are you saying that instead of using a rotational matrix ... that I should use a 2-D array for rotCW? So why does numpy have a matrix class? Is the class only used when working with matplotlib?
To get the scalar value (sum of squares) I had to use a transpose, T, on one argument.
At this point, you have raised some long-standing issues. There are a couple of standard replies people give to some of them, e.g.:

1. don't use matrices, OR
2. don't mix the use of matrices and arrays

Matrices are *always* 2d (e.g., a "row vector" or a "column vector" is 2d). So in fact you should find it quite natural that that transpose was needed. Matrices change * to matrix multiplication and ** to matrix exponentiation. I find this very convenient, especially in a teaching setting, so I use NumPy matrices all the time. Many on this list avoid them completely.

Again, if you want a *scalar* as the product of vectors for which you created matrix objects (e.g., a and b), you can just use flat:

    np.dot(a.flat, b.flat)

hth, Alan Isaac
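The points above in one sketch (the example matrices are mine, not from the thread): * and ** carry matrix semantics for matrix objects, and .flat gives the scalar dot product without a transpose:

```python
import numpy as np

a = np.matrix([[1], [2], [3]])
b = np.matrix([[4], [5], [6]])
print((a.T * b)[0, 0])          # 32, via an explicit transpose
print(np.dot(a.flat, b.flat))   # 32, no transpose needed

A = np.matrix([[1, 1], [0, 1]])
print(A ** 2)                   # matrix power, not elementwise squaring
```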
Yes, flat sounds useful here. However, numpy isn't bending over backwards to tie conventional mathematical language into it. I don't recall flat in any calculus books. :-) Maybe I've been away so long from it that it is a common math concept? Although I doubt that.
On Sat, Dec 19, 2009 at 10:38 AM, Wayne Watson wrote: ... I don't recall flat in any calculus books. :-) ...

Flat is a programming concept. Programming and mathematics have some overlap, but they aren't the same by any means.

Chuck
That's for sure! :-)
Wayne Watson wrote:
Yes, flat sounds useful here. However, numpy isn't bending over backwards to tie in conventional mathematical language into it.
exactly -- it isn't bending over at all! (well, a little -- see below).

numpy was designed for general-purpose computational needs, not any one branch of math. nd-arrays are very useful for lots of things. In contrast, Matlab, for instance, was originally designed to be an easy front end to a linear algebra package. Personally, when I used Matlab, I found that very awkward -- I was usually writing hundreds of lines of code that had nothing to do with linear algebra for every few lines that actually did matrix math. So I much prefer numpy's way -- the linear algebra lines of code are longer and more awkward, but the rest is much better.

The Matrix class is the exception to this: it was written to provide a natural way to express linear algebra. However, things get a bit tricky when you mix matrices and arrays, and even when sticking with matrices there are confusions and limitations -- how do you express a row vs. a column vector? what do you get when you iterate over a matrix? etc.

There has been a bunch of discussion about these issues, a lot of good ideas, a little bit of consensus about how to improve it, but no one with the skill to do it has enough motivation.

As for your problem, I think a 3-d Euclidean vector is well expressed as a (3,) shape array, and then you don't need flat, etc.

In [6]: v1 = np.array((1,2,3), dtype=np.float)

In [7]: v2 = np.array((3,1,2), dtype=np.float)

In [8]: np.dot(v1,v2)
Out[8]: 11.0

-Chris
-- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov
I guess I'll become accustomed to it over time. I have some interesting things to do for which I will need the facilities of numpy.

I realized where I got into trouble with some of this: I was not differentiating between the dimensionality of space and that of a matrix or array. I haven't had to crank out math and computer work for quite a while. Further, I've been doing a lot of reading on the Big Bang and the dimensionality of space, so I'm presently strongly biased towards thinking about space. For example, when I say 2D, I'm thinking of plane-geometry space, and 3D as the world we live in.

Thanks to all on this thread.
On Sat, Dec 19, 2009 at 11:50 AM, Wayne Watson wrote: ... I was not differentiating between the dimensionality of space and that of a matrix or array. ...

Ah, you got confused between number of elements (spatial dimension) vs. number of indices (programming dimensions). Programming dimensions are more like the dimensions of a box or container, i.e., width x height (2-dimensional array) or width x height x depth (3-dimensional array). You can put stuff in a container, and the dimensions tell you how it is arranged inside.
Chuck
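That container view in a sketch (the example arrays are not from the thread): ndim counts indices, shape counts elements along each index, so a point in 3-D space is still a 1-dimensional array:

```python
import numpy as np

p = np.array([1.0, 2.0, 3.0])  # a point in 3-D *space*...
print(p.ndim, p.shape)         # 1 (3,) -- one index, three elements

box = np.zeros((4, 5))         # width x height container
print(box.ndim, box.shape)     # 2 (4, 5) -- two indices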
Christopher Barker wrote:
Wayne Watson wrote:
Yes, flat sounds useful here. However, numpy isn't bending over backwards to tie in conventional mathematical language into it.
exactly -- it isn't bending over at all! (well a little -- see below). numpy was designed for general purpose computational needs, not any one branch of math. nd-arrays are very useful for lots of things. In contrast, Matlab, for instance, was originally designed to be an easy front-end to linear algebra package. Personally, when I used Matlab, I found that very awkward -- I was usually writing 100s of lines of code that had nothing to do with linear algebra, for every few lines that actually did matrix math. So I much prefer numpy's way -- the linear algebra lines of code are longer an more awkward, but the rest is much better.
The Matrix class is the exception to this: is was written to provide a natural way to express linear algebra. However, things get a bit tricky when you mix matrices and arrays, and even when sticking with matrices there are confusions and limitations -- how do you express a row vs a column vector? what do you get when you iterate over a matrix? etc.
There has been a bunch of discussion about these issues, a lot of good ideas, a little bit of consensus about how to improve it, but no one with the skill to do it has enough motivation to do it.
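The quirks mentioned above are easy to see directly; a short sketch, assuming only NumPy:

```python
import numpy as np

M = np.matrix([[1, 2], [3, 4]])

# Iterating over a matrix yields 1x2 matrices, never 1-D rows...
first = next(iter(M))
print(type(first).__name__, first.shape)   # matrix (1, 2)

# ...while iterating over the equivalent array yields 1-D rows.
first_row = next(iter(np.asarray(M)))
print(first_row.shape)                     # (2,)

# And row vs. column vectors must be distinguished purely by shape:
row = np.matrix([[1, 2, 3]])      # shape (1, 3)
col = np.matrix([[1], [2], [3]])  # shape (3, 1)
print(row.shape, col.shape)       # (1, 3) (3, 1)
```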
I recently got motivated to get better linear algebra for Python, and started submitting and writing patches for Sage instead (which of course uses NumPy underneath). Sage has a strong concept of matrices and vectors, but not much numerical support -- mainly exact or multi-precision arithmetic. So perhaps there will be more progress there; I'm not sure yet how far it will get or if anybody will join me in doing it... To me that seems like the ideal way to split up code -- let NumPy/SciPy deal with the array-oriented world and Sage the closer-to-mathematics notation. I never liked the NumPy matrix class. I think this is mainly because my matrices are often, but not always, diagonal, which doesn't fit at all into NumPy's way of thinking about these things. (Also, a 2D or 3D array could easily be a "vector", like if you want to linearly transform the values of the pixels in an image. So I think any Python linear algebra package has to attack things in a totally different way from numpy.matrix.) Dag Sverre
I think the "bottom line" is: _only_ use the matrix class if _all_ you're doing is matrix algebra -- which, as Chris Barker said, is (likely) the exception, not the rule, for most numpy users. I feel confident in saying this (that is, _only_ ... _all_) because if you feel you really must have a matrix (which I think should never really be the case: all the operations of matrix algebra can be done w/ arrays; it's just that some look a little more elegant if the operands are matrices), you can always cast a 2-D array (or a 1-D array, but then you have to be careful about whether you're casting to a row vector or a column vector) to a matrix -- A = np.matrix(np.array(a)) -- "on the fly," so to speak. That said, I'll be the first to acknowledge that those coming to array programming after having come up through a pure math curriculum -- where "array" is essentially synonymous with "matrix," tensors rarely being written out in all their gory component glory -- are confronted with a perhaps surprising adjustment. Since no one has yet provided an explicit example of, IMO, the most fundamental difference between a 2-D numpy array and a numpy matrix, observe:
>>> a = np.array([[1, 2], [3, 4]])
>>> a
array([[1, 2],
       [3, 4]])
>>> A = np.matrix(a)
>>> A
matrix([[1, 2],
        [3, 4]])
>>> a*a   # multiplication is performed "element by element"
array([[ 1,  4],
       [ 9, 16]])
>>> A*A   # standard matrix multiplication is performed
matrix([[ 7, 10],
        [15, 22]])
In other words, the most fundamental difference (not the only difference, but the one which pretty much characterizes all the others) is the way the multiplication operator is overloaded: array multiplication is "element by element," whereas matrix multiplication is, well, matrix multiplication; oh, and the fact that type is preserved, i.e., the type of an array times an array is an array, the type of a matrix times a matrix is a matrix. (But be careful:
>>> A*a
matrix([[ 7, 10],
        [15, 22]])
>>> a*A
matrix([[ 7, 10],
        [15, 22]])
i.e., multiplication of a matrix by an array is allowed, and regardless of order, the array operand is cast to a matrix, resulting in matrix multiplication and a matrix-type result.) HTH, DG
Dag Sverre Seljebotn wrote:
I recently got motivated to get better linear algebra for Python;
wonderful!
To me that seems like the ideal way to split up code -- let NumPy/SciPy deal with the array-oriented world and Sage the closer-to-mathematics notation.
well, maybe -- but there is a lot of call for pure-computational linear algebra. I do hope you'll consider building the computational portion of it in a way that might be included in numpy or scipy by itself in the future. Have you read this lengthy thread? and these summary wikipages: http://scipy.org/NewMatrixSpec http://www.scipy.org/MatrixIndexing Though it sounds a bit like you are going your own way with it anyway. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov
On Mon, Dec 21, 2009 at 9:57 AM, Christopher Barker
Dag Sverre Seljebotn wrote:
I recently got motivated to get better linear algebra for Python;
wonderful!
To me that seems like the ideal way to split up code -- let NumPy/SciPy deal with the array-oriented world and Sage the closer-to-mathematics notation.
well, maybe -- but there is a lot of call for pure-computational linear algebra. I do hope you'll consider building the computational portion of it in a way that might be included in numpy or scipy by itself in the future.
My personal opinion is that the LA status quo is acceptably good: there's maybe a bit of an adjustment to make for newbies, but I don't see it as a very big one, and this list strikes me as very efficient at getting people over little bumps (e.g., someone emails in "how do you matrix-multiply two arrays?" and within minutes :-) Robert or Charles replies with "np.dot: np.dot([[1,2],[3,4]], [[1,2],[3,4]]) = array([[7,10],[15,22]])"). Certainly any significant changes to the base should need to run the gauntlet of an NEP process. DG
2009/12/21 David Goldsmith
I think we have one major lacuna: vectorized linear algebra. If I have to solve a whole whack of four-dimensional linear systems, right now I need to either write a python loop and use linear algebra on them one by one, or implement my own linear algebra. It's a frustrating lacuna, because all the machinery is there: generalized ufuncs and LAPACK wrappers. Somebody just needs to glue them together. I've even tried making a start on it, but numpy's ufunc machinery and generic type system is just too much of a pain for me to make any progress as is. I think if someone wanted to start building a low-level generalized ufunc library interface to LAPACK, that would be a big improvement in numpy/scipy's linear algebra. Pretty much everything else strikes me as a question of notation. (Not to trivialize it: good notation makes a tremendous difference.) Anne
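The Python loop Anne describes might look like the following sketch (an illustrative example, with invented variable names, not code from the thread):

```python
import numpy as np

# A stack of 1000 independent 4x4 linear systems A[i] x[i] = b[i].
np.random.seed(0)
A = np.random.randn(1000, 4, 4)
b = np.random.randn(1000, 4)

# The status quo Anne laments: a Python loop making one LAPACK call
# per tiny system, paying interpreter overhead each time.
x = np.array([np.linalg.solve(Ai, bi) for Ai, bi in zip(A, b)])
print(x.shape)  # (1000, 4)
```

(For what it's worth, later NumPy releases grew exactly the gufunc-based linear algebra Anne asks for: np.linalg.solve now broadcasts over leading dimensions, so np.linalg.solve(A, b) handles the whole stack in one call.)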
Christopher Barker wrote:
> Though it sounds a bit like you are going your own way with it anyway.
Yes, I'm going my own way with it -- the SciPy matrix discussion tends to focus on cosmetics IMO, and I just tend to fundamentally disagree with the direction these discussions take on the SciPy/NumPy lists. What I'm after is not just some cosmetics for avoiding a call to dot. I'm after something which will allow me to structure my programs better -- something which e.g. allows my sampling routines to not care (by default, rather than as a workaround) about whether the specified covariance matrix is sparse or dense when trying to Cholesky decompose it, or something which allows one to set the best iterative solver to use for a given matrix at an outer level in the program, but do the actual solving somewhere else, without all the boilerplate and all the variable passing and callbacks. -- Dag Sverre
On Mon, Dec 21, 2009 at 1:31 PM, Dag Sverre Seljebotn
OK, it sounds like these sorts of things might be "universally" useful! :-) Keep us apprised, please. DG
Christopher Barker wrote:
> I do hope you'll consider building the computational portion of it in a way that might be included in numpy or scipy by itself in the future.
This is readily done -- there is no computational portion except for what is in NumPy/Scipy or scikits, and I intend for it to remain that way. It's just another interface, really. (What kind of computations were you thinking about?) -- Dag Sverre
Dag Sverre Seljebotn wrote:
This is readily done -- there is no computational portion except for what is in NumPy/Scipy or scikits, and I intend for it to remain that way. It's just another interface, really.
(What kind of computations were you thinking about?)
Nothing in particular -- just computational as opposed to symbolic manipulation. It sounds like you've got some good ideas. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov
Christopher Barker wrote:
> Nothing in particular -- just computational as opposed to symbolic manipulation.
OK. As a digression, I think it is easy to get the wrong impression that Sage is for "symbolics" vs. "computations". The reality is that symbolics has been one of the *weaker* aspects of Sage (though steadily improving) -- the strong aspect is computations, but with elements that NumPy doesn't handle efficiently: arbitrary-size integers and rationals, polynomials (or vectors of their coefficients if you wish -- just numbers, not symbols), and so on. So the Sage design is very much about computation; it is just that standard floating point hasn't got all that much attention. -- Dag Sverre
On Tue, Dec 22, 2009 at 1:06 AM, Dag Sverre Seljebotn
Good to know, Dag, thanks for the "digression." :-) DG
On Sat, Dec 19, 2009 at 9:45 AM, Wayne Watson
Dag Sverre Seljebotn wrote:
Wayne Watson wrote:
I'm trying to compute the angle between two vectors in three dimensional space. For that, I need to use the "scalar (dot) product" , according to a calculus book (quoting the book) I'm holding in my hands right now. I've used dot() successfully to produce the necessary angle. My program works just fine.
In the case of the dot() function, one must use np.dot(x.T, x), where x is 3x1.

I'm not quite sure what your point is about dot() unless you are thinking in some non-Euclidean fashion. One can form np.dot(a, b) with a and b arrays of 3x4 and 4x2 shape to arrive at a 3x2 array. That's definitely not a scalar. Is there a need for this sort of calculation in non-Euclidean geometry, which I have never dealt with?
There's a difference between 1D and 2D arrays that's important here. For a 1D array, np.dot(x.T, x) == np.dot(x, x), since there's only one dimension.
A 4x1, 1x7, and 1x5 would be examples of a 1D array or matrix, right?
No, they are all 2D. All matrices are 2D. An array is 1D if it doesn't have a second dimension, which might be confusing if you have only seen vectors represented as arrays. To see the number of dimensions in a numpy array, use shape:

In [1]: array([[1,2],[3,4]])
Out[1]:
array([[1, 2],
       [3, 4]])

In [2]: array([[1,2],[3,4]]).shape
Out[2]: (2, 2)

In [3]: array([1, 2, 3, 4])
Out[3]: array([1, 2, 3, 4])

In [4]: array([1, 2, 3, 4]).shape
Out[4]: (4,)
Are you saying that instead of using a rotational matrix like

theta = 5.0  # degrees
m1 = matrix([[2], [5]])
rotCW = matrix([[ cosD(theta), sinD(theta)],
                [-sinD(theta), cosD(theta)]])
m2 = rotCW * m1
m1 = np.array(m1)
m2 = np.array(m2)

that I should use a 2-D array for rotCW? So why does numpy have a matrix class? Is the class only used when working with matplotlib?
Numpy has a matrix class because Python lacks a dedicated matrix-multiplication operator: where * normally means element-wise multiplication, the matrix class uses it for matrix multiplication, which is different. Having a short form for matrix multiplication is sometimes a convenience and is also more familiar to folks coming to numpy from Matlab.
To get the scalar value (sum of squares) I had to use a transpose, T, on one argument.
That is if the argument is 2D. It's not strictly speaking a scalar product, but we won't go into that here ;) <snip> Chuck
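To make Chuck's point concrete, a short sketch (editorial example; the cosD/sinD helpers from Wayne's snippet are replaced here with np.cos/np.sin on radians):

```python
import numpy as np

# With a 1-D array there is no row/column distinction, so no transpose
# is needed and the dot product is already a plain scalar:
x = np.array([1, 2, 3])
print(np.dot(x, x))            # 14

# The rotation example with plain arrays: a 2-D array for the rotation,
# a 1-D array for the point, and np.dot for the matrix-vector product.
theta = np.deg2rad(5.0)
rotCW = np.array([[ np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
m1 = np.array([2.0, 5.0])
m2 = np.dot(rotCW, m1)
print(m2.shape)                # (2,)
```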
OK, so what's your recommendation on the code I wrote? Use shape (N,)? Will that eliminate the need for T? I'll go back to Tentative Python and re-read dimension, shape and the like.
--
Wayne Watson (Watson Adventures, Prop., Nevada City, CA)
(121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
Obz Site: 39° 15' 7" N, 121° 2' 32" W, 2700 feet
"... humans' innate skills with numbers isn't much
better than that of rats and dolphins."
-- Stanislas Dehaene, neurosurgeon
Web Page:
On 12/18/2009 5:54 PM, Keith Goodman wrote:
On Fri, Dec 18, 2009 at 2:51 PM, Wayne Watson wrote:
> That should do it. Thanks. How do I get the scalar result by itself?
>>> np.dot(x.T, x)[0, 0]
14
or
>>> x = np.array([1, 2, 3])
>>> np.dot(x, x)
14
or np.dot(x.flat, x.flat).

fwiw,
Alan Isaac
participants (8)
- Alan G Isaac
- Anne Archibald
- Charles R Harris
- Christopher Barker
- Dag Sverre Seljebotn
- David Goldsmith
- Keith Goodman
- Wayne Watson