non normalised eigenvectors

Dear all,

I am not an expert in NumPy, but my undergraduate student is having some issues with the way NumPy returns the normalized eigenvectors corresponding to the eigenvalues. We do understand that an eigenvector is divided by its norm to get the unit eigenvector; however, we need the original vectors for the purposes of my research. This has been a really frustrating experience, as NumPy returns the normalized vectors by default. I appreciate any suggestions on how to go about this issue. This seems to be an outstanding issue for people using NumPy.

Thanks,
LP

Could you elaborate a bit more on what you mean by original eigenvectors? They denote a direction, hence you can scale them to any size anyway.

_______________________________________________
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-leave@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/

Thank you for the reply. I am working with the Laplacian matrix of a graph, which is the degree matrix minus the adjacency matrix. The Laplacian is a symmetric matrix and its smallest eigenvalue is zero. As the rows add up to 0, Lx = 0x, and the all-ones vector is the resulting eigenvector. The normalized eigenvector is the all-ones vector divided by its norm; so if the graph has 10 vertices, each entry of the normalized eigenvector is 1/sqrt(10). I do understand that a scalar times a normalized eigenvector is also a solution, but for the purposes of my research I need the normalized eigenvector * norm. For the 0 eigenvalue the norm of the eigenvector is easy to figure out, but not for the other eigenvalues. That is what I meant by the original eigenvector; sorry for the confusion. Most eigenvalue/eigenvector calculators will give you the all-ones vector for the first eigenvector.

Best,
Louis Petingi
Professor of Computer Science
College of Staten Island
City University of NY
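As a quick illustration of that point (using a small made-up 4-vertex graph, not one from this thread): the rows of L = D - A sum to zero, so the all-ones vector is an eigenvector with eigenvalue 0.

```python
import numpy as np

# Adjacency matrix of a small, hypothetical 4-vertex graph
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])
D = np.diag(A.sum(axis=1))  # degree matrix
L = D - A                   # graph Laplacian

# Every row of L sums to zero, so L @ ones == 0 * ones:
# the all-ones vector is an eigenvector for eigenvalue 0.
ones = np.ones(4)
print(L @ ones)  # -> [0. 0. 0. 0.]
```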

On Sat, Feb 25, 2023 at 2:11 PM Louis Petingi <Louis.Petingi@csi.cuny.edu> wrote:
> I do understand that a scale * normalized eigenvector is also a solution but for the purpose of my research I need the normalized eigenvector * norm.
I apologize, but I'm going to harp on this: you are using the word "the" as if there is one unique magnitude that we could report. There is none. If you have a use for a specific convention for the magnitudes, you'll have to point us to the literature that talks about it, and we might be able to give you pointers as to how to get the magnitudes that your research requires.

> For the 0 eigenvalue the norm of the eigenvector is easy to figure out but not for the other eigenvalues. That is what I meant by the original eigenvector and sorry for the confusion. Most eigenvalue/eigenvector calculators will give you 1 for the first eigenvector.

I'm afraid that doesn't really narrow anything down for us.

-- Robert Kern

Hi, thanks.

Very simply, one of the solutions for the zero eigenvalue is the all-ones eigenvector. If I get back this all-ones vector for the 0 eigenvalue, then the other eigenvectors will be in the right format I am looking for. Once again, the all-ones vector is the normalized eigenvector * norm.

Best,
Louis Petingi

On Sat, Feb 25, 2023 at 4:11 PM Louis Petingi <Louis.Petingi@csi.cuny.edu> wrote:
> Very simply one of the solutions for the zero eigenvalue is the 1 eigenvector. If I get back this 1 vector, for the 0 eigenvalue then the other eigenvectors will be in the right format I am looking for. Once again, the 1 vector is the normalized eigenvector * norm.
There are multiple rules that could get this result. You could multiply all of the eigenvectors by `sqrt(n)`. Alternately, you could multiply each eigenvector `v` by `1/v[np.nonzero(v)[0][0]]` (i.e., by the reciprocal of the first nonzero value in the eigenvector). Both would give you the desired result for the 0-eigenvalue eigenvector, but different results for every other eigenvector. Both are simple and coherent rules, but quite likely neither is what you are looking for. Remember, each eigenvector can be scaled independently of any other, so establishing the desired result for one eigenvector does nothing to constrain any other. If you want help finding the rule that will help you in your research, you'll have to let us know what you are going to use the magnitudes _for_. The information you've given so far (where the eigenvectors come _from_) doesn't actually help narrow anything down. The only application of eigenvector magnitudes of graph Laplacians that I am aware of is the Fiedler vector, and that actually requires unit eigenvectors.

-- Robert Kern
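For concreteness, here is a sketch of both rescaling rules on a small example Laplacian (the path graph on 3 vertices; the choice of graph is mine, not from the thread):

```python
import numpy as np

# Laplacian of the path graph on 3 vertices (an arbitrary example)
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
vals, vecs = np.linalg.eigh(L)  # columns of `vecs` are unit eigenvectors
n = L.shape[0]

# Rule 1: multiply every eigenvector by sqrt(n).  The 0-eigenvector's
# entries become +/-1 (the sign is still up to the algorithm).
by_sqrt_n = vecs * np.sqrt(n)

# Rule 2: divide each eigenvector by its first nonzero entry.
# The 0-eigenvector becomes exactly the all-ones vector.
by_first_nonzero = vecs.copy()
for j in range(n):
    col = by_first_nonzero[:, j]
    k = np.nonzero(~np.isclose(col, 0.0))[0][0]  # index of first nonzero entry
    by_first_nonzero[:, j] = col / col[k]
```

Both rules agree on the 0-eigenvalue column but differ on every other column, which is exactly the point: fixing the magnitude of one eigenvector constrains none of the others.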

As you mentioned, this is a generalization of the Fiedler eigenvector. When applying spectral clustering, if you want to find two clusters, the Fiedler eigenvector tells you how to partition the vertices (a bipartition) so that the normalized cut is minimized. The concept can be generalized to k clusters by applying K-means to the first k eigenvectors. The two partitions can be determined by grouping the vertices corresponding to the negative values of the eigenvector into one cluster, with the other cluster being the vertices corresponding to the non-negative values.

I can perhaps use the normalized eigenvectors, but I am not sure whether this is correct; thus, I prefer using the magnitudes. In theory it may work, as for example the entries of the Fiedler eigenvector are divided by the norm. The eigenvalue and eigenvector calculators (on the internet) find the magnitude of the vector.

The project is aimed at showing that spectral clustering works for images treated as graphs where the pixels are the vertices. We are having some issues with K-means, but if we get the Fiedler eigenvector we can test how K-means works for two partitions.

Best,
Louis

On Sat, Feb 25, 2023 at 5:33 PM Louis Petingi <Louis.Petingi@csi.cuny.edu> wrote:
> As you mentioned this is a generalization of the Fiedler eigenvector. When applying spectral clustering, and you want to find the two clusters, then the Fiedler eigenvector tells you how to partition the vertices (bipartition) so the normalized cut is minimized. The concept can be generalized to k clusters by applying K-means to the first k eigenvectors.

> I can use perhaps the normalized eigenvectors, but I am not sure if this is correct; thus, I prefer using the magnitudes. In theory it may work, as for example the entries of the Fiedler eigenvector are divided by the norm.
My understanding of this family of techniques is that the K-means step is basically heuristic (see section 8.4 of [1]). You have a lot of choices here, and playing around with the magnitudes might be among them. But there is nothing that comes _from_ the eigenvector calculation that can help determine what this should be. There's nothing special about the way that `np.linalg.eig()` computes eigenvectors before normalization that will improve the K-means step. Rather, you have the freedom to scale the eigenvectors freely to help out the K-means calculation (or any other clustering that you might want to do there). I suspect that the unit-normalization is probably one of the better ones. [1] http://people.csail.mit.edu/dsontag/courses/ml14/notes/Luxburg07_tutorial_sp...
> The eigenvalue and eigenvector calculators (on the internet) find the magnitude of the vector.
No, they do not find "the magnitude", because there is no such thing. They just give _an_ arbitrary magnitude that happened to be the raw output of their _particular_ algorithm without the following unit-normalization step. There are lots of algorithms to compute eigenvectors. The magnitudes of the eigenvector solutions before normalization are not all the same across the different algorithms. There is absolutely no guarantee (and very little probability) that, if we modified `np.linalg.eig()` to not apply normalization, the unnormalized eigenvectors would match the unnormalized eigenvectors of any other implementation.

-- Robert Kern

Hi Robert,

I read somewhere that we can use the unit vector times a scalar for the Fiedler eigenvector. Thus, the question is: for the first k eigenvectors, do we multiply the corresponding unit vectors by the same scalar? That said, my feeling is that applying K-means to the first k eigenvectors will give you different clusters, as when you multiply the unit eigenvectors by different scalars the k-dimensional coordinates will be different and the distances between the points will also change. We could try starting with the unit vectors, multiplying by different scalars, and seeing what we get. As you mentioned, K-means is a heuristic, and perhaps there is a graph-theoretical approach to find the k clusters by generalizing Fiedler's result.

Thanks, and I'll let you know,
Louis

Hi Robert,

Just a follow-up: I (or rather, my student) was able to get the all-ones vector from the 0 eigenvector. Even though either the normalized eigenvectors or this set of eigenvectors may work, we could try the two sets. I am not sure whether multiplying the unit vectors by a scalar is the correct approach. From the paper that you sent me, I applied the first algorithm, using the unnormalized Laplacian (the first of the three algorithms shown on page 6), when I taught this course at CUNY's Graduate Center. This is how we got the all-ones vector.

Thanks

On Sat, Feb 25, 2023 at 8:17 PM Louis Petingi <Louis.Petingi@csi.cuny.edu> wrote:
> Just a follow up. I was able (my student) to get the 1 vector from the 0 eigenvector. Even though the normalized or this set of eigenvectors will work, we could try the two sets. Not sure if multiplying the unit vectors by a scalar is the correct approach.
Yeah, that essentially amounts to multiplying all of the eigenvectors by `np.sqrt(n)`. I don't imagine it will do much to change the results of K-means or any other clustering algorithm unless it has absolute distance thresholds embedded in it. But it's absolutely a free choice for you to precondition things for your clustering algorithm.

-- Robert Kern
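A quick check of that claim (with synthetic coordinates, just to illustrate): rescaling every eigenvector by the same constant multiplies all pairwise distances in the spectral embedding by that constant, so any clustering decision that only compares distances is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))  # stand-in spectral embedding: one row per vertex
c = np.sqrt(6.0)             # the sqrt(n) rescaling factor, as above
Y = c * X

# Pairwise distance matrices before and after rescaling
dX = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
dY = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)

# Every distance scales by the same factor c, so distance *comparisons*
# (and hence K-means assignments without absolute thresholds) are unaffected.
print(np.allclose(dY, c * dX))  # -> True
```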

On Sat, 25 Feb 2023 at 20:09, Louis Petingi <Louis.Petingi@csi.cuny.edu> wrote:
> Thank you for the reply. I am working with the Laplacian matrix of a graph, which is the Degree matrix minus the adjacency matrix. The Laplacian is a symmetric matrix and the smallest eigenvalue is zero. As the rows add up to 0, Lx = 0x, and the all-ones vector is the resulting vector.
The ones vector is one solution, but so is the all-twos vector, or any other vector where all the entries are equal. There is nothing in the eigenvector problem that makes one more "worthy" than the others, except in particular applications.
> That is what I meant by the original eigenvector, and sorry for the confusion. Most eigenvalue/eigenvector calculators will give you 1 for the first eigenvector.
SymPy does seem to always return a 1 in the last nonzero entry of the vector. Is that the normalisation you are looking for? Here is how it is computed, by the way: https://github.com/sympy/sympy/blob/26f7bdbe3f860e7b4492e102edec2d6b429b5aaf...

/David.
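If that is indeed the convention you want, you do not need SymPy for it; you can impose it on NumPy's unit eigenvectors after the fact. A sketch (the single-edge Laplacian is just a toy example, and the "last nonzero entry equals 1" rule is my reading of SymPy's apparent convention):

```python
import numpy as np

L = np.array([[ 1., -1.],
              [-1.,  1.]])        # Laplacian of a single edge
vals, vecs = np.linalg.eigh(L)    # unit-norm columns, eigenvalues ascending

# Rescale each eigenvector so its *last* nonzero entry equals 1,
# mimicking the convention SymPy's eigenvects() appears to follow.
for j in range(vecs.shape[1]):
    col = vecs[:, j]
    nz = np.nonzero(~np.isclose(col, 0.0))[0]  # indices of nonzero entries
    vecs[:, j] = col / col[nz[-1]]

# eigenvalues are approximately [0, 2]; rescaled eigenvectors are
# [1, 1] (for 0) and [-1, 1] (for 2), regardless of LAPACK's sign choice
print(vals, vecs.T)
```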

On Sat, Feb 25, 2023 at 11:39 AM <louis.petingi@csi.cuny.edu> wrote:
> Dear all,
> I am not an expert in NumPy but my undergraduate student is having some issues with the way NumPy returns the normalized eigenvectors corresponding to the eigenvalues. We do understand that an eigenvector is divided by the norm to get the unit eigenvectors; however, we need the original vectors for the purpose of my research. This has been a really frustrating experience, as NumPy returns the normalized vectors by default. I appreciate any suggestions on how to go about this issue.
I'm not sure what you mean by "the original vectors". All multiples of the unit eigenvector are eigenvectors; none has a claim on being "the original vector". Do you have a reference for what you are referring to? It's possible that there are specific procedures that happen to spit out vectors that are eigenvectors but have semantics about the magnitude, but `np.linalg.eig()` does not implement that procedure. The semantics about the magnitude would be supplied by that specific procedure.

-- Robert Kern
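For what it's worth, the unit normalization is easy to see in practice; the `np.linalg.eig` documentation states that the returned eigenvectors are normalized to unit length. A minimal check (the 2x2 matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])
vals, vecs = np.linalg.eig(A)

# Each column of `vecs` comes back with Euclidean norm 1; any magnitude
# information from the underlying LAPACK routine is already gone.
norms = np.linalg.norm(vecs, axis=0)
print(norms)
```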
participants (5)
- David Menéndez Hurtado
- Ilhan Polat
- Louis Petingi
- louis.petingi@csi.cuny.edu
- Robert Kern