Improving performance in matrix operations
Steven D'Aprano
steve+comp.lang.python at pearwood.info
Thu Mar 10 03:25:50 EST 2016
On Thursday 10 March 2016 07:09, Drimades wrote:
> I'm doing some tests with operations on numpy matrices in Python. As an
> example, it takes about 3000 seconds to compute the eigenvalues and
> eigenvectors of a 6000x6000 matrix using scipy.linalg.eig(a). Is that an
> acceptable time?
I don't know what counts as acceptable. Do you have a thousand of these
systems to solve by next Tuesday? Or one a month? Can you adjust your
workflow to start the calculation and then go off to lunch, or do you
require interactive use?
> Any suggestions to improve?
Use smaller matrices? :-) Use a faster computer?
This may give you some ideas:
https://www.ibm.com/developerworks/community/blogs/jfp/entry/A_Comparison_Of_C_Julia_Python_Numba_Cython_Scipy_and_BLAS_on_LU_Factorization?lang=en
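If you want a feel for how the time scales on your own machine, a
quick-and-dirty timing sketch along these lines (untested here; assumes
numpy and scipy are installed) is easy to adapt. Note in particular that
if your matrix happens to be symmetric or Hermitian, eigh is usually a
good deal faster than the general-purpose eig:

    import time
    import numpy as np
    from scipy import linalg

    n = 2000  # start smaller than 6000 to get a feel for the scaling
    a = np.random.rand(n, n)

    t0 = time.perf_counter()
    linalg.eig(a)     # general (non-symmetric) eigensolver
    print("eig:  %.1f s" % (time.perf_counter() - t0))

    sym = a + a.T     # make a symmetric test matrix
    t0 = time.perf_counter()
    linalg.eigh(sym)  # symmetric/Hermitian solver
    print("eigh: %.1f s" % (time.perf_counter() - t0))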
> Does C++ perform better with matrices?
Specifically on your computer? I don't know; you'll have to try it. The
actual time taken by a program depends on the hardware you run it on, not
just the language it is written in.
> Another thing to consider is that the matrices I'm processing are
> heavily sparse. Do numpy/scipy implement any parallelism? While my code
> is running, one of my cores is 100% busy and the other only 30% busy.
You might get better answers to technical questions like that on the
dedicated numpy and scipy mailing lists.
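Having said that, since you mention the matrices are heavily sparse: if
you only need a handful of eigenvalues rather than all 6000, scipy's
ARPACK-based sparse eigensolver may be worth a try before you rewrite
anything in C++. A rough sketch, assuming a CSR matrix and that only the
six largest-magnitude eigenvalues are wanted:

    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 6000
    a = sp.random(n, n, density=0.001, format="csr")  # toy sparse matrix

    # k eigenvalues of largest magnitude; k must be well below n
    vals, vecs = spla.eigs(a, k=6, which="LM")
    print(vals)

As for the uneven core usage: the dense solvers hand the heavy lifting to
whatever LAPACK/BLAS library your numpy and scipy were built against, and
how many threads get used depends on that library. numpy.show_config()
will tell you which one you have.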
--
Steve