This code --

    adj = [ [eval(y) for y in x.split()] for x in infile ]
    val,vec = numpy.linalg.eig(adj)
    master = zip( val, vec.transpose() )
    master.sort()

*sometimes* gives this error:

    Traceback (most recent call last):
      File "3work.py", line 14, in <module>
        master.sort()
    ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

What does sort() care about truth values?!

numpy.linalg.eig() sometimes returns complex values, which may not have a natural ordering, but I get this error (sometimes) even when it returns reals.

-- Anton Sherwood, http://www.ogre.nu/
"How'd ya like to climb this high *without* no mountain?" --Porky Pine
Anton Sherwood wrote:
This code --
adj = [ [eval(y) for y in x.split()] for x in infile ]
val,vec = numpy.linalg.eig(adj)
master = zip( val, vec.transpose() )
master.sort()
*sometimes* gives this error:
Traceback (most recent call last):
  File "3work.py", line 14, in <module>
    master.sort()
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
What does sort() care about truth values?!
The sort method on lists basically uses cmp(x, y) to determine sort order. In this case, I suspect you are getting errors whenever you have equal-valued eigenvalues, so the comparison has to go to the second item in the tuple (which is an array). cmp(x, y) must return -1, 0, or 1, which doesn't work on arrays with more than one element because the comparison is ambiguous. Thus you get this error.

The operation is undefined. What do you actually want to do when you have equal-valued eigenvalues?

-Travis
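A minimal sketch of the failure Travis describes, plus a key-based workaround (written for current Python/NumPy and assuming real eigenvalues; the names are illustrative). Sorting on a key that looks only at the eigenvalue never compares the eigenvector arrays, so the ambiguous-truth-value error cannot arise even when two eigenvalues are equal:

    import numpy

    # Tuple comparison falls through to the second items when the first items
    # are equal; comparing two arrays there raises the ambiguity error.
    a = numpy.array([1.0, 2.0])
    b = numpy.array([3.0, 4.0])
    try:
        (0.5, a) < (0.5, b)
    except ValueError as err:
        print(err)   # "The truth value of an array with more than one element is ambiguous..."

    # Workaround: sort on the eigenvalue alone.
    val = numpy.array([0.5, 0.5, 2.0])
    vec = numpy.eye(3)
    master = list(zip(val, vec.T))
    master.sort(key=lambda pair: pair[0])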
Travis Oliphant wrote:
cmp(x,y) must return -1, 0, or 1 which doesn't work on arrays with more than 1 element because it is ambiguous. Thus you get this error.
Ah. Since lists *can* be compared, I assumed arrays would inherit that property.
The operation is undefined. What do you actually want to do when you have equal-valued eigenvalues?
Don't care.

All I really need is the four highest eigenvalues and their vectors. I'll have a look in "Python Cookbook" to see if there's a more efficient way to do that partial sort.

-- Anton Sherwood, http://www.ogre.nu/
On 4/26/07, Anton Sherwood <anton.sherwood@gmail.com> wrote:
Travis Oliphant wrote:
cmp(x,y) must return -1, 0, or 1 which doesn't work on arrays with more than 1 element because it is ambiguous. Thus you get this error.
Ah. Since lists *can* be compared, I assumed arrays would inherit that property.
The operation is undefined. What do you actually want to do when you have equal-valued eigenvalues?
Don't care.
All I really need is the four highest eigenvalues and their vectors. I'll have a look in "Python Cookbook" to see if there's a more efficient way to do that partial sort.
I don't think there are any partial sorts available. I think an argsort on the eigenvalues, followed by a take using the last four resulting indices, would be the way to go.

Chuck
On 4/26/07, Anton Sherwood <anton.sherwood@gmail.com> wrote:
All I really need is the four highest eigenvalues and their vectors. I'll have a look in "Python Cookbook" to see if there's a more efficient way to do that partial sort.
Maybe "heapq.nlargest()" is what you want? http://www.python.org/doc/2.4/lib/module-heapq.html
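For example, something along these lines (a sketch for current Python/NumPy with a made-up adjacency matrix; it assumes the eigenvalues come out real, and the key ensures only eigenvalues, never eigenvector arrays, get compared):

    import heapq
    import numpy

    # Small symmetric stand-in for the adjacency matrix from the original post
    # (a 5-node cycle).
    adj = numpy.array([[0, 1, 0, 0, 1],
                       [1, 0, 1, 0, 0],
                       [0, 1, 0, 1, 0],
                       [0, 0, 1, 0, 1],
                       [1, 0, 0, 1, 0]], dtype=float)

    val, vec = numpy.linalg.eig(adj)

    # Four eigenpairs with the largest eigenvalues, without a full sort.
    top4 = heapq.nlargest(4, zip(val, vec.T), key=lambda pair: pair[0])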
tan2 wrote:
On 4/26/07, * Anton Sherwood* <anton.sherwood@gmail.com <mailto:anton.sherwood@gmail.com>> wrote:
All I really need is the four highest eigenvalues and their vectors. I'll have a look in "Python Cookbook" to see if there's a more efficient way to do that partial sort.
Maybe "heapq.nlargest()" is what you want? http://www.python.org/doc/2.4/lib/module-heapq.html
Just doing argsort() on the whole array is faster (up until about 1e6 elements) because it does everything in C whereas heapq will create a lot of Python objects because it is treating the array as a general Python container.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
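A rough way to check the crossover on one's own machine (a sketch; the exact numbers depend on the NumPy version and hardware):

    import heapq
    import timeit
    import numpy

    val = numpy.random.rand(100000)

    t_argsort = timeit.timeit(lambda: val.argsort()[-4:], number=100)
    t_heapq = timeit.timeit(
        lambda: heapq.nlargest(4, range(len(val)), key=val.__getitem__),
        number=100)
    print("argsort:", t_argsort, "heapq:", t_heapq)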
Just doing argsort() on the whole array is faster (up until about 1e6 elements) because it does everything in C whereas heapq will create a lot of Python objects because it is treating the array as a general Python container.
That's a good point. I wasn't thinking about the efficiency issue.
On 26/04/07, Anton Sherwood <anton.sherwood@gmail.com> wrote:
All I really need is the four highest eigenvalues and their vectors.
It's not really very efficient, but sorting time is going to be tiny compared to however you get your eigenvalues, so you could argsort the eigenvalue array and use the result to sort the eigenvector array. You can also accelerate the process by quickly checking whether the eigenvalue array is already sorted - usually it is.

Anne
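The already-sorted shortcut Anne mentions can be a one-liner (a sketch; val stands for the eigenvalue array and is assumed real):

    import numpy

    val = numpy.array([0.3, 1.1, 2.5, 2.5, 4.0])   # example eigenvalues

    # Skip the argsort entirely when the eigenvalues already come out ascending.
    if numpy.all(val[:-1] <= val[1:]):
        order = numpy.arange(len(val))
    else:
        order = val.argsort()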
Anton Sherwood wrote:
Travis Oliphant wrote:
cmp(x,y) must return -1, 0, or 1 which doesn't work on arrays with more than 1 element because it is ambiguous. Thus you get this error.
Ah. Since lists *can* be compared, I assumed arrays would inherit that property.
The operation is undefined. What do you actually want to do when you have equal-valued eigenvalues?
Don't care.
One approach is to use argsort to create an index list of sorted eigenvalues and then sort the eig and eigvector arrays before zipping them together in a list of tuples.

    eig, val = numpy.linalg.eig(a)
    indx = eig.argsort()
    eig = eig.take(indx)
    val = val.take(indx, axis=1)
    master = zip(eig, val.T)

Will do what you want.

-Travis
Travis Oliphant wrote:
One approach is to use argsort to create an index list of sorted eigenvalues and then sort the eig and eigvector arrays before zipping them together in a list of tuples.
eig, val = numpy.linalg.eig(a)
indx = eig.argsort()
eig = eig.take(indx)
val = val.take(indx, axis=1)
master = zip(eig, val.T)
Thank you, that worked. http://www.ogre.nu/wp/?p=1978

I refined it slightly:

    val,vec = numpy.linalg.eig(adj)
    indx = val.argsort()[-4:-1]
    val = val.take(indx)
    vec = vec.take(indx, axis=1)
    master = zip(val, vec.T)

-- Anton Sherwood, http://www.ogre.nu/
"How'd ya like to climb this high *without* no mountain?" --Porky Pine
On 4/28/07, Anton Sherwood <bronto@pobox.com> wrote:
Travis Oliphant wrote:
One approach is to use argsort to create an index list of sorted eigenvalues and then sort the eig and eigvector arrays before zipping them together in a list of tuples.
eig, val = numpy.linalg.eig(a)
indx = eig.argsort()
eig = eig.take(indx)
val = val.take(indx, axis=1)
master = zip(eig, val.T)
Thank you, that worked. http://www.ogre.nu/wp/?p=1978
I refined it slightly:
val,vec = numpy.linalg.eig(adj)
indx = val.argsort()[-4:-1]
val = val.take(indx)
vec = vec.take(indx, axis=1)
master = zip(val, vec.T)
But that won't get the 4 largest, and will ignore the last eigenvalue, whatever it is. If you want the four largest, do

    val,vec = numpy.linalg.eig(adj)
    ind = val.argsort()
    val = val.take(ind[-4:])
    vec = vec.take(ind[-4:], axis=1)
    master = zip(val, vec.T)

Chuck
Anton Sherwood wrote:

I refined it slightly:
val,vec = numpy.linalg.eig(adj)
indx = val.argsort()[-4:-1]
val = val.take(indx)
vec = vec.take(indx, axis=1)
master = zip(val, vec.T)
Charles R Harris wrote:
But that won't get the 4 largest, and will ignore the last eigenvalue, whatever it is. . . .
Are you making two points here or only one?

I'm using eigenvectors of a graph's adjacency matrix as "topological" coordinates of the graph's vertices as embedded in 3space (something I learned about just recently). Whenever I've done this with a graph that *does* have a good 3d embedding, using the first eigenvector results in a flat model: apparently the first is not independent, at least in such cases. So, yeah, I really did intend to throw away the largest and use the next three.

Are you saying this code doesn't give me that? The output of my first few test cases looks good.

-- Anton Sherwood, http://www.ogre.nu/
"How'd ya like to climb this high *without* no mountain?" --Porky Pine
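For anyone who wants to try the layout trick Anton describes, here is a rough sketch following the same recipe (the names and the example graph are illustrative, not from his code): take the three eigenvectors just below the largest eigenvalue and use them as x, y, z coordinates, one row per vertex.

    import numpy

    # Adjacency matrix of a triangular prism (3-regular, with a clean 3D shape).
    adj = numpy.array([[0, 1, 1, 1, 0, 0],
                       [1, 0, 1, 0, 1, 0],
                       [1, 1, 0, 0, 0, 1],
                       [1, 0, 0, 0, 1, 1],
                       [0, 1, 0, 1, 0, 1],
                       [0, 0, 1, 1, 1, 0]], dtype=float)

    val, vec = numpy.linalg.eig(adj)
    order = val.argsort()

    # Drop the largest eigenvalue (for a connected regular graph its eigenvector
    # is constant, so it contributes no shape) and keep the next three.
    coords = vec[:, order[-4:-1]].real

    for vertex, xyz in enumerate(coords):
        print(vertex, xyz)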
Anton Sherwood wrote:
Anton Sherwood wrote:

I refined it slightly:
val,vec = numpy.linalg.eig(adj)
indx = val.argsort()[-4:-1]
val = val.take(indx)
vec = vec.take(indx, axis=1)
master = zip(val, vec.T)
Charles R Harris wrote:
But that won't get the 4 largest, and will ignore the last eigenvalue, whatever it is. . . .
Are you making two points here or only one?
Only one.
I'm using eigenvectors of a graph's adjacency matrix as "topological" coordinates of the graph's vertices as embedded in 3space (something I learned about just recently). Whenever I've done this with a graph that *does* have a good 3d embedding, using the first eigenvector results in a flat model: apparently the first is not independent, at least in such cases. So, yeah, I really did intend to throw away the largest and use the next three. Are you saying this code doesn't give me that? The output of my first few test cases looks good.
No, it's just that in previous emails you had stated that you wanted the four largest eigenvalues and said nothing about actually wanting to throw away the largest of the four. Sometimes people get confused with Python's (and thus numpy's) indexing, so Charles wanted to make sure you knew what [-4:-1] actually did.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
On 4/29/07, Anton Sherwood <bronto@pobox.com> wrote:
Anton Sherwood wrote:

I refined it slightly:
val,vec = numpy.linalg.eig(adj)
indx = val.argsort()[-4:-1]
val = val.take(indx)
vec = vec.take(indx, axis=1)
master = zip(val, vec.T)
Charles R Harris wrote:
But that won't get the 4 largest, and will ignore the last eigenvalue, whatever it is. . . .
Are you making two points here or only one?
I'm using eigenvectors of a graph's adjacency matrix as "topological" coordinates of the graph's vertices as embedded in 3space (something I learned about just recently). Whenever I've done this with a graph that *does* have a good 3d embedding, using the first eigenvector results in a flat model: apparently the first is not independent, at least in such cases. So, yeah, I really did intend to throw away the largest and use the next three. Are you saying this code doesn't give me that? The output of my first few test cases looks good.
Looks good, then. In your original post you talked about the four largest eigenvalues. Anyway, the embedding part sounds interesting, I'll have to think about why that works.

Chuck
Anton Sherwood wrote:
I'm using eigenvectors of a graph's adjacency matrix as "topological" coordinates of the graph's vertices as embedded in 3space (something I learned about just recently). Whenever I've done this with a graph that *does* have a good 3d embedding, using the first eigenvector results in a flat model: apparently the first is not independent, at least in such cases. . . .
Charles R Harris wrote:
. . . the embedding part sounds interesting, I'll have to think about why that works.
It's a mystery to me: I never did study enough matrix algebra to get a feel for eigenvectors (indeed this is the first time I've had anything to do with them). I'll happily share my code with anyone who wants to experiment with it.

-- Anton Sherwood, http://www.ogre.nu/
2007/4/29, Anton Sherwood <anton.sherwood@gmail.com>:
Anton Sherwood wrote:

I'm using eigenvectors of a graph's adjacency matrix as "topological" coordinates of the graph's vertices as embedded in 3space (something I learned about just recently). Whenever I've done this with a graph that *does* have a good 3d embedding, using the first eigenvector results in a flat model: apparently the first is not independent, at least in such cases. . . .
Charles R Harris wrote:
. . . the embedding part sounds interesting, I'll have to think about why that works.
It's a mystery to me: I never did study enough matrix algebra to get a feel for eigenvectors (indeed this is the first time I've had anything to do with them).
I'll happily share my code with anyone who wants to experiment with it.
Seems to me that this is much like Isomap and classical multidimensional scaling, no?

Matthieu
Sorry for joining this discussion late. If you are only interested in the four largest eigenvalues, there are more efficient algorithms out there than just eig(). There are algorithms that just give you the N largest. Then again, I don't know of any Python implementations, but I haven't looked,

Mark

On Apr 29, 11:04 pm, "Matthieu Brucher" <matthieu.bruc...@gmail.com> wrote:
2007/4/29, Anton Sherwood <anton.sherw...@gmail.com>:
Anton Sherwood wrote:

I'm using eigenvectors of a graph's adjacency matrix as "topological" coordinates of the graph's vertices as embedded in 3space (something I learned about just recently). Whenever I've done this with a graph that *does* have a good 3d embedding, using the first eigenvector results in a flat model: apparently the first is not independent, at least in such cases. . . .
Charles R Harris wrote:
. . . the embedding part sounds interesting, I'll have to think about why that works.
It's a mystery to me: I never did study enough matrix algebra to get a feel for eigenvectors (indeed this is the first time I've had anything to do with them).
I'll happily share my code with anyone who wants to experiment with it.
Seems to me that this is much like Isomap and classical multidimensional scaling, no?
Matthieu
mark wrote:
Sorry for joining this discussion late. If you are only interested in the four largest eigenvalues, there are more efficient algorithms out there than just eig(). There are algorithms that just give you the N largest. Then again, I don't know of any Python implementations, but I haven't looked,

Mark
On Apr 29, 11:04 pm, "Matthieu Brucher" <matthieu.bruc...@gmail.com> wrote:
2007/4/29, Anton Sherwood <anton.sherw...@gmail.com>:
Anton Sherwood wrote:
I'm using eigenvectors of a graph's adjacency matrix as "topological" coordinates of the graph's vertices as embedded in 3space (something I learned about just recently). Whenever I've done this with a graph that *does* have a good 3d embedding, using the first eigenvector results in a flat model: apparently the first is not independent, at least in such cases. . . .
Charles R Harris wrote:
. . . the embedding part sounds interesting, I'll have to think about why that works.
It's a mystery to me: I never did study enough matrix algebra to get a feel for eigenvectors (indeed this is the first time I've had anything to do with them).
I'll happily share my code with anyone who wants to experiment with it.
Seems to me that this is much like Isomap and classical multidimensional scaling, no?
Matthieu
There are several subroutines in LAPACK for this task:

http://www.netlib.org/lapack/double/dsyevx.f
http://www.netlib.org/lapack/double/dsygvx.f

IIRC symeig provides a wrapper. See http://mdp-toolkit.sourceforge.net/symeig.html

Nils
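For what it's worth, current SciPy wraps ARPACK for exactly this kind of partial eigenproblem; I am not sure what was available at the time of this thread, so the following is only a sketch for a modern installation (the example matrix is made up):

    import numpy
    from scipy.sparse.linalg import eigsh

    # Adjacency matrix of a 10-node cycle, built as a dense symmetric array.
    n = 10
    adj = numpy.zeros((n, n))
    for i in range(n):
        adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0

    # Four eigenpairs with the largest algebraic eigenvalues, computed with
    # ARPACK instead of a full eig(); k must be smaller than n.
    val, vec = eigsh(adj, k=4, which='LA')
    master = list(zip(val, vec.T))

For dense symmetric problems, scipy.linalg.eigh can also be asked for just a range of eigenvalues, which is closer in spirit to the dsyevx routine above.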
participants (10)

- Anne Archibald
- Anton Sherwood
- Anton Sherwood
- Charles R Harris
- mark
- Matthieu Brucher
- Nils Wagner
- Robert Kern
- tan2
- Travis Oliphant