non-normalised eigenvectors
Dear all, I am not an expert in NumPy, but my undergraduate student is having some issues with the way NumPy returns the normalized eigenvectors corresponding to the eigenvalues. We do understand that an eigenvector is divided by its norm to get the unit eigenvector; however, we need the original vectors for the purposes of my research. This has been a really frustrating experience, as NumPy returns the normalized vectors by default. I appreciate any suggestions on how to go about this issue. This seems to be an outstanding issue for people using NumPy. Thanks, LP
Could you elaborate a bit more on what you mean by original eigenvectors? They denote a direction, hence you can scale them to any size anyway.
_______________________________________________
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-leave@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Thank you for the reply. I am working with the Laplacian matrix of a graph, which is the degree matrix minus the adjacency matrix.
The Laplacian is a symmetric matrix and its smallest eigenvalue is zero. Since each row sums to 0, Lx = 0x, and the all-ones vector 1 is a resulting eigenvector. The normalized eigenvector is the 1 vector divided by its norm, so if the graph has 10 vertices, each entry of the normalized eigenvector is 1/sqrt(10). I do understand that any scalar multiple of a normalized eigenvector is also a solution, but for the purposes of my research I need the normalized eigenvector times its norm. For the 0 eigenvalue the norm of the eigenvector is easy to figure out, but not for the other eigenvalues.
That is what I meant by the original eigenvector; sorry for the confusion. Most eigenvalue/eigenvector calculators will give you 1 for the entries of the first eigenvector.
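To make this concrete, the setup above can be reproduced in NumPy; a minimal sketch (the path graph on 4 vertices is an arbitrary example):

```python
import numpy as np

# Path graph on 4 vertices.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))    # degree matrix
L = D - A                     # graph Laplacian

# eigh suits the symmetric Laplacian; eigenvalues come back in ascending
# order and each eigenvector column has unit norm.
vals, vecs = np.linalg.eigh(L)
# vals[0] is (numerically) 0 and every entry of vecs[:, 0] is +-1/sqrt(4),
# i.e. the all-ones vector divided by its norm, up to an arbitrary sign.
```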
Best
Louis Petingi
Professor of Computer Science
College of Staten Island
City University of NY
I apologize, but I'm going to harp on this: you are using the word "the" as if there is one unique magnitude that we could report. There is none. If you have a use for a specific convention for the magnitudes, you'll have to point us to the literature that talks about it, and we might be able to give you pointers as to how to get the magnitudes that your research requires.

> For the 0 eigenvalue the norm of the eigenvector is easy to figure out, but not for the other eigenvalues. That is what I meant by the original eigenvector; sorry for the confusion. Most eigenvalue/eigenvector calculators will give you 1 for the first eigenvector.

I'm afraid that doesn't really narrow anything down for us.

— Robert Kern
Hi, thanks.
Very simply, one of the solutions for the zero eigenvalue is the all-ones eigenvector 1. If I get back this 1 vector for the 0 eigenvalue, then the other eigenvectors will be in the right format I am looking for. Once again, the 1 vector is the normalized eigenvector times its norm.
Best
Louis Petingi
There are multiple rules that could get this result. You could multiply all of the eigenvectors by `sqrt(n)`. Alternately, you could multiply each eigenvector `v` by `1/v[np.nonzero(v)[0][0]]` (i.e., divide by the first nonzero value in the eigenvector). Both would give you the desired result for the 0-eigenvalue eigenvector, but different results for every other eigenvector. Both are simple and coherent rules, but quite likely neither is what you are looking for. Remember, each eigenvector can be scaled independently of any other, so establishing the desired result for one eigenvector does nothing to constrain any other.

If you want help finding the rule that will help you in your research, you'll have to let us know what you are going to use the magnitudes _for_. The information you've given so far (where the eigenvectors come _from_) doesn't actually help narrow anything down. The only application of eigenvector magnitudes of graph Laplacians that I am aware of is the Fiedler vector, and that actually requires unit eigenvectors.

— Robert Kern
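Both conventions are easy to apply after the fact to the unit eigenvectors that NumPy returns; a sketch, using the triangle graph as an arbitrary example (the helper name is made up):

```python
import numpy as np

# Triangle graph K3.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
vals, vecs = np.linalg.eigh(L)      # columns of `vecs` are unit eigenvectors
n = L.shape[0]

# Rule 1: multiply every unit eigenvector by sqrt(n).
scaled_sqrt = vecs * np.sqrt(n)

# Rule 2: divide each eigenvector by its first nonzero entry.
def first_nonzero_scaled(v, tol=1e-12):
    j = np.nonzero(np.abs(v) > tol)[0][0]
    return v / v[j]

scaled_first = np.column_stack([first_nonzero_scaled(vecs[:, k])
                                for k in range(n)])

# Both rules turn the 0-eigenvalue eigenvector into the all-ones vector
# (up to an arbitrary sign, for rule 1), but they disagree on every other
# eigenvector.
```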
As you mentioned, this is a generalization of the Fiedler eigenvector. When applying spectral clustering and you want to find two clusters, the Fiedler eigenvector tells you how to partition the vertices (a bipartition) so that the normalized cut is minimized. The concept can be generalized to k clusters by applying k-means to the first k eigenvectors. The two partitions can be determined by grouping the vertices corresponding to the negative values of the eigenvector into one cluster, with the other cluster being the vertices corresponding to the nonnegative values.
I could perhaps use the normalized eigenvectors, but I am not sure this is correct; thus I prefer using the magnitudes. In theory it may work, as for example the entries of the Fiedler eigenvector are divided by the norm.
The eigenvalue and eigenvector calculators (on the internet) find the magnitude of the vector.
The project aims to show that spectral clustering works for images viewed as graphs where the pixels are the vertices.
We are having some issues with k-means, but if we get the Fiedler eigenvector we can test how k-means works for two partitions.
Best
Louis
My understanding of this family of techniques is that the k-means step is basically heuristic (see section 8.4 of [1]). You have a lot of choices here, and playing around with the magnitudes might be among them. But there is nothing that comes _from_ the eigenvector calculation that can help determine what this should be. There's nothing special about the way that `np.linalg.eig()` computes eigenvectors before normalization that will improve the k-means step. Rather, you have the freedom to scale the eigenvectors freely to help out the k-means calculation (or any other clustering that you might want to do there). I suspect that the unit normalization is probably one of the better ones.

[1] http://people.csail.mit.edu/dsontag/courses/ml14/notes/Luxburg07_tutorial_sp...
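For the two-cluster case, the sign-based Fiedler bipartition discussed in this thread needs no k-means at all; a minimal sketch, on an arbitrary example graph of two triangles joined by a bridge:

```python
import numpy as np

# Two triangles {0,1,2} and {3,4,5} joined by the single edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

vals, vecs = np.linalg.eigh(L)    # eigenvalues in ascending order
fiedler = vecs[:, 1]              # eigenvector of the second-smallest eigenvalue

# Sign-based bipartition: negative entries in one cluster, the rest in the other.
clusters = (fiedler >= 0).astype(int)
# The split separates the two triangles; which side gets which label depends
# on the arbitrary sign of the returned eigenvector.
```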
> The eigenvalue and eigenvector calculators (on the internet) find the magnitude of the vector.

No, they do not find "the magnitude", because there is no such thing. They just give _an_ arbitrary magnitude that happened to be the raw output of their _particular_ algorithm without the following unit-normalization step. There are lots of algorithms to compute eigenvectors. The magnitudes of the eigenvector solutions before normalization are not all the same across the different algorithms. There is absolutely no guarantee (and very little probability) that if we modified `np.linalg.eig()` to not apply normalization, the unnormalized eigenvectors would match the unnormalized eigenvectors of any other implementation.

— Robert Kern
Hi Robert,
I read somewhere that we can use the unit vector times a scalar for the Fiedler eigenvector. Thus, the question is: for the first k eigenvectors, do we multiply the corresponding unit vectors by the same scalar? That said, my feeling is that applying k-means to the first k eigenvectors will give you different clusters, since when you multiply the unit eigenvectors by different scalars the k-dimensional coordinates will be different and the distances between the points will also change.
We could try starting with the unit vectors, multiplying by different scalars, and seeing what we get. As you mentioned, k-means is a heuristic, and perhaps there is a graph-theoretical approach to finding the k clusters by generalizing Fiedler's result.
Thanks, and I'll let you know
Louis
From: Louis Petingi
Hi Robert,
Just a follow-up. My student was able to get the 1 vector from the 0-eigenvalue eigenvector. Even though either the normalized eigenvectors or this set of eigenvectors should work, we could try both sets. I am not sure that multiplying the unit vectors by a scalar is the correct approach.
From the paper that you sent me, I applied the first algorithm, using the unnormalized Laplacian (the first of the three algorithms shown on page 6), when I taught this course at CUNY's Graduate Center.
This is how we got the 1 vector.
Thanks
Yeah, that essentially amounts to multiplying all of the eigenvectors by `np.sqrt(n)`. I don't imagine it will do much to change the results of k-means or any other clustering algorithm unless it has absolute distance thresholds embedded in it. But it's absolutely a free choice for you to precondition things for your clustering algorithm.

— Robert Kern
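A quick way to see why a global `sqrt(n)` rescaling leaves distance-based clustering unchanged; random points stand in for the spectral embedding in this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))   # rows play the role of the embedded vertices
Y = np.sqrt(8) * X            # the same points after the sqrt(n) rescaling

def pairwise(Z):
    # Matrix of Euclidean distances between all pairs of rows.
    return np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)

# Every pairwise distance scales by the same constant sqrt(8), so any
# clustering that only compares distances partitions X and Y identically.
assert np.allclose(pairwise(Y), np.sqrt(8) * pairwise(X))
```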
> Thank you for the reply. I am working with the Laplacian matrix of a graph, which is the degree matrix minus the adjacency matrix. The Laplacian is a symmetric matrix and its smallest eigenvalue is zero. Since each row sums to 0, Lx = 0x, and the all-ones vector 1 is a resulting eigenvector.

The ones vector is one solution, but so is the all-twos vector, or any other vector where all the entries are equal. There is nothing in the eigenvalue problem that makes one more "worthy" than the others, except in particular applications.

> That is what I meant by the original eigenvector; sorry for the confusion. Most eigenvalue/eigenvector calculators will give you 1 for the first eigenvector.

SymPy does seem to always return a 1 in the last nonzero entry of the vector. Is that the normalisation you are looking for? Here is how it is computed, by the way: https://github.com/sympy/sympy/blob/26f7bdbe3f860e7b4492e102edec2d6b429b5aaf... /David.
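If that convention (last nonzero entry rescaled to 1) is the one wanted, it is easy to emulate on NumPy's output; a sketch, where the helper name is made up:

```python
import numpy as np

def last_entry_one(v, tol=1e-12):
    """Rescale v so its last nonzero entry equals 1 (SymPy-style convention)."""
    j = np.nonzero(np.abs(v) > tol)[0][-1]
    return v / v[j]

# Laplacian of the path graph on 3 vertices.
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
vals, vecs = np.linalg.eigh(L)
v0 = last_entry_one(vecs[:, 0])   # becomes the all-ones vector
```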
I'm not sure what you mean by "the original vectors". All multiples of the unit eigenvector are eigenvectors; none has a claim to being "the original vector". Do you have a reference for what you are referring to? It's possible that there are specific procedures that happen to spit out vectors that are eigenvectors but carry semantics in their magnitude; however, `np.linalg.eig()` does not implement any such procedure. The semantics of the magnitude would be supplied by that specific procedure.

— Robert Kern
participants (5)
- David Menéndez Hurtado
- Ilhan Polat
- Louis Petingi
- louis.petingi@csi.cuny.edu
- Robert Kern