[Numpy-discussion] performance matrix multiplication vs. matlab

Sebastian Walter sebastian.walter at gmail.com
Fri Jun 5 06:03:15 EDT 2009


On Thu, Jun 4, 2009 at 10:56 PM, Chris Colbert<sccolbert at gmail.com> wrote:
> I should update after reading the thread Sebastian linked:
>
> The current 1.3 version of numpy (don't know about previous versions) uses
> the optimized ATLAS BLAS routines for numpy.dot() if numpy was compiled with
> these libraries. I've verified this on linux only, though it shouldn't be
> any different on windows AFAIK.

In the best of all possible worlds this would be done by a package
maintainer...
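
For anyone who wants to check their own install, here is a quick, unofficial
sketch (assuming a 1.x numpy built the usual way): show_config() lists the
BLAS/LAPACK numpy was linked against, and on these versions the
numpy.core._dotblas extension module is only built when a CBLAS (e.g. ATLAS)
was found at compile time, so its presence is a reasonable proxy for an
accelerated dot():

import time
import numpy as np

np.show_config()   # prints the blas/lapack info numpy was built against

# On numpy 1.x, dot() is routed through numpy.core._dotblas only when a
# CBLAS (e.g. ATLAS) was available at build time; otherwise it falls back
# to the slow built-in loop.
try:
    import numpy.core._dotblas
    print("dot() looks BLAS-accelerated")
except ImportError:
    print("dot() is using the unoptimized fallback")

# A large matmul is the practical test: with a threaded ATLAS it should
# finish quickly and load all cores.
n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)
t0 = time.time()
np.dot(a, b)
print("%d x %d dot took %.2f s" % (n, n, time.time() - t0))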


>
> chris
>
> On Thu, Jun 4, 2009 at 4:54 PM, Chris Colbert <sccolbert at gmail.com> wrote:
>>
>> Sebastian is right.
>>
>> Since MATLAB R2007 (I think that's the version) it has included support
>> for multi-core architectures. On my Core 2 Quad here at the office, R2008b has
>> no problem utilizing 100% CPU for large matrix multiplications.
>>
>>
>> If you download and build ATLAS and LAPACK from source and enable
>> parallel threads in ATLAS, then compile numpy against these libraries, you
>> should achieve similar, if not better, performance (since the ATLAS routines
>> will be tuned to your system).
>>
>> If you're on Windows, you need to do some trickery to get threading to
>> work (the instructions are on the ATLAS website).
>>
>> Chris
>>
>>
>
>
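
Regarding the build Chris describes: once ATLAS and LAPACK are compiled with
threading enabled, numpy is pointed at them via a site.cfg placed next to
numpy's setup.py before building. A minimal sketch, with example paths and
the library names a threaded ATLAS typically installs (ptf77blas/ptcblas);
adjust both to whatever your build actually produced:

[atlas]
library_dirs = /usr/local/atlas/lib
include_dirs = /usr/local/atlas/include
# threaded ATLAS libraries; a serial build installs f77blas/cblas instead
atlas_libs = lapack, ptf77blas, ptcblas, atlas

Then a plain

python setup.py build
python setup.py install

should pick the libraries up, and the show_config()/_dotblas check above
should confirm that dot() is going through ATLAS.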


