Hi,

When doing matrix-matrix multiplications with large matrices, using the BLAS library (Basic Linear Algebra Subprograms) can speed things up a lot. I don't think Numeric takes advantage of this (is this correct?). Will numarray be able to do that?

Jens
-----Original Message-----
From: numpy-discussion-admin@lists.sourceforge.net [mailto:numpy-discussion-admin@lists.sourceforge.net] On Behalf Of Jens Jorgen Mortensen
Sent: Thursday, February 20, 2003 3:59 AM
To: numpy-discussion@lists.sourceforge.net
Subject: [Numpy-discussion] BLAS
Hi,
When doing matrix-matrix multiplications with large matrices, using the BLAS library (Basic Linear Algebra Subprograms) can speed up things a lot. I don't think Numeric takes advantage of this (is this correct?).
No. You can configure it at installation to use the BLAS of choice.
Will numarray be able to do that?
Jens
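[Editor's aside, not part of the original thread: in modern NumPy, the successor to Numeric and numarray, one can check which BLAS/LAPACK the installation was linked against. A minimal sketch, assuming a present-day NumPy install; `show_config` did not exist in Numeric itself:]

```python
import numpy as np

# show_config() prints the BLAS/LAPACK libraries this NumPy build
# was compiled and linked against (e.g. OpenBLAS, MKL, ATLAS).
np.show_config()
```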
Hi,

As Paul Dubois says, some Numeric functions can be configured to use the BLAS library. However, the BLAS is not used for perhaps the most common and important operation: matrix/vector multiplication.

We have written a small patch to replace the matrixproduct/dot/innerproduct functions in multiarraymodule.c with the appropriate BLAS calls. The patch (against Numeric 21.1b) can be found at http://www.dcs.ex.ac.uk/~aschmolc/Numeric and can give a speed-up of a factor of 40 on 1000 by 1000 matrices using the Atlas BLAS. More details of the (naive!) timings can be found there too.

We had planned to make a general announcement of this patch (updated to suit Numeric 22) in a week or so. However, we have just noticed that Numeric.dot (= Numeric.innerproduct = Numeric.matrixmultiply) does not take the complex conjugate of its first argument. Taking the complex conjugate seems to me to be the right thing for a routine named dot or innerproduct. Indeed, until we were bitten by it not taking the conjugate, I thought it did.

Can someone here explain the rationale behind having dot, innerproduct and matrixmultiply all do the same thing, with none of them taking the conjugate? (Matlab's dot() takes the conjugate, although Matlab's mtimes() (called for A*B) does not.)

I would propose that innerproduct and dot be changed to take the conjugate, and that a new function that doesn't (say, mtimes) be introduced. I suspect, however, that this would break too much existing code. It would be nice to get it right in Numarray. Alternatively, can someone suggest how both functions can be conveniently and non-confusingly exposed?

Richard.

Paul F Dubois <Paul> writes:
Hi,
When doing matrix-matrix multiplications with large matrices, using the BLAS library (Basic Linear Algebra Subprograms) can speed up things a lot. I don't think Numeric takes advantage of this (is this correct?).
No. You can configure it at installation to use the BLAS of choice.
Will numarray be able to do that?
Jens
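[Editor's aside, not part of the original thread: the behavior Richard describes can be seen concretely. Modern NumPy inherited it — `np.dot` does not conjugate its first argument, while `np.vdot` does, which is the "true" complex inner product he asks for. A sketch using NumPy rather than the old Numeric module:]

```python
import numpy as np

a = np.array([1 + 2j, 3 - 1j])
b = np.array([2 - 1j, 1 + 1j])

# dot: plain sum of products, no conjugation -- Numeric.dot behaved this way
plain = np.dot(a, b)            # (4+3j) + (4+2j) = 8+5j

# a true complex inner product conjugates the first argument
inner = np.vdot(a, b)           # (0-5j) + (2+4j) = 2-1j
same = np.dot(np.conj(a), b)    # equivalent spelled out by hand

print(plain, inner, same)
```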
R.M.Everson@exeter.ac.uk (R.M.Everson) writes:
Hi,
As Paul Dubois says, some Numeric functions can be configured to use the BLAS library. However, the BLAS is not used for, perhaps the most common and important operation: matrix/vector multiplication.
We have written a small patch to interface to replace the matrixproduct/dot/innerproduct functions in multiarraymodule.c with the appropriate BLAS calls.
The patch (against Numeric 21.1b) can be found at http://www.dcs.ex.ac.uk/~aschmolc/Numeric and can give a speed up of a factor of 40 on 1000 by 1000 matrices using the Atlas BLAS. More details of the (naive!) timings can be found there too.
An addendum: the new version is no longer a patch against Numeric, but a separate module, currently called 'dotblas'. This is a cleaner approach, as it doesn't require using a modified version of Numeric.

To use this fast dot instead of Numeric's dot, you can e.g. do:

    import Numeric
    # no errors if dotblas isn't installed
    try:
        import dotblas
        Numeric.dot = dotblas.dot
    except ImportError:
        pass

I just put a prerelease (which still handles complex arrays DIFFERENTLY from Numeric!!!) online at:

http://www.dcs.ex.ac.uk/~aschmolc/Numeric/dotblas.html

enjoy,

alex
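[Editor's aside, not part of the original thread: the kind of speed-up the dotblas module targets can be illustrated with a rough sketch, comparing a pure-Python triple loop against modern NumPy's BLAS-backed `dot`. The matrix size and figures are illustrative only and will vary by machine and BLAS:]

```python
import time
import numpy as np  # np.dot is BLAS-backed, much like the patched Numeric

def naive_matmul(a, b):
    """Triple-loop matrix multiply: roughly what dot costs without BLAS."""
    n, k = a.shape
    m = b.shape[1]
    c = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = 0.0
            for l in range(k):
                s += a[i, l] * b[l, j]
            c[i, j] = s
    return c

n = 120  # kept small so the pure-Python loop finishes quickly
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c_naive = naive_matmul(a, b)
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
c_blas = np.dot(a, b)
t_blas = time.perf_counter() - t0

assert np.allclose(c_naive, c_blas)
print(f"naive loop: {t_naive:.3f}s   BLAS dot: {t_blas:.5f}s")
```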
participants (4)
- Alexander Schmolck
- Jens Jorgen Mortensen
- Paul F Dubois
- R.M.Everson@exeter.ac.uk