[Numpy-discussion] MemoryError for computing eigen-vector on 10,000*10,000 matrix
Zhenxin Zhan
andyjian430074 at gmail.com
Wed Apr 29 01:49:58 EDT 2009
Thanks. My mistake.
The OS is 32-bit. I am doing a network simulation for my teacher. The average degree of the network topology is about 6.0, so I think the matrix is sparse.
The paper needs the eigenvalues and eigenvectors, which are necessary for the further simulation. I use the following procedure:
1. Read the network vertex information from a txt file into a 10,000*10,000 nested list 'lists'.
2. Then use numpy.array(lists, dtype=float) to get an array object 'A'.
3. Finally, use numpy.linalg.eig(A) to get the eigenvalues and eigenvectors.
4. Use the 'tofile' function to write them to a local file.
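A minimal sketch of the four steps above, shrunk to a 4x4 adjacency matrix so it fits in memory (the matrix values and output file names here are placeholders, not the original data):

```python
import numpy as np

# 1. Read the vertex information into nested lists; here we build a
#    small symmetric adjacency matrix directly for illustration.
lists = [[0, 1, 1, 0],
         [1, 0, 1, 1],
         [1, 1, 0, 0],
         [0, 1, 0, 0]]

# 2. Convert the nested lists to a NumPy array of doubles.
A = np.array(lists, dtype=float)

# 3. Compute the eigenvalues and eigenvectors.
w, v = np.linalg.eig(A)

# 4. Write the results to local files in raw binary form.
w.tofile("eigenvalues.bin")
v.tofile("eigenvectors.bin")
```

Note that at n = 10,000 step 3 alone needs an n x n double array for the eigenvectors (about 800 MB), on top of the input array, which is why it fails on a 32-bit OS.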
I will look into SciPy.
Thanks so much.
2009-04-29
Zhenxin Zhan
From: Charles R Harris
Sent: 2009-04-29 00:36:03
To: Discussion of Numerical Python
Cc:
Subject: Re: [Numpy-discussion] MemoryError for computing eigen-vector on 10,000*10,000 matrix
2009/4/28 Zhenxin Zhan <andyjian430074 at gmail.com>
Thanks for your reply.
My OS is Windows XP SP3. I tried to use array(obj, dtype=float), but it didn't work. And I tried 'float32' as you told me. Here is the error message:
File "C:\Python26\Lib\site-packages\numpy\linalg\linalg.py", line 791, in eig
    a, t, result_t = _convertarray(a) # convert to double or cdouble type
File "C:\Python26\Lib\site-packages\numpy\linalg\linalg.py", line 727, in _convertarray
    a = _fastCT(a.astype(t))
MemoryError
Looks like only a double-precision routine is available for eig, so a float32 input gets converted anyway. eigh is better for symmetric matrices, and if you only want the eigenvalues and not the eigenvectors you should use eigvals or eigvalsh and save the space devoted to the eigenvectors, which by themselves will put you over the memory limit.
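A minimal sketch of that suggestion, on a small symmetric matrix (undirected networks give symmetric adjacency matrices): unlike eig, the symmetric solvers accept float32 without promoting to double, and eigvalsh skips the n x n eigenvector array entirely.

```python
import numpy as np

# Small symmetric adjacency matrix standing in for the real network.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=np.float32)  # float32 halves storage

# Eigenvalues only -- no n x n eigenvector array is allocated.
w = np.linalg.eigvalsh(A)

# If the eigenvectors are needed too, eigh is still cheaper than eig
# for symmetric input (and returns real results).
w_full, v = np.linalg.eigh(A)
```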
The OS question is whether you are running a 64-bit or a 32-bit OS. A 64-bit OS could use swap, although the routine would take forever to finish. Really, you don't have enough memory for a problem that size. Perhaps if you tell us what you want to achieve we can suggest a better approach. Also, if your matrix is sparse, other algorithms might be more appropriate.
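For instance, with an average degree of about 6 the 10,000x10,000 adjacency matrix has only ~60,000 nonzeros, so a sparse format plus an iterative eigensolver needs far less memory than dense eig. A sketch using current SciPy's ARPACK wrapper (the small matrix is a placeholder; k would be chosen to match how many eigenpairs the simulation actually needs):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigsh

# Placeholder symmetric adjacency matrix; the real one would be built
# directly in sparse form from the edge list, never densified.
dense = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
A = csr_matrix(dense)

# Compute only the k largest-magnitude eigenpairs instead of all n.
w, v = eigsh(A, k=2, which='LM')
```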
Chuck