[Numpy-discussion] performance solving system of equations in numpy and MATLAB

Edison Gustavo Muenz edisongustavo at gmail.com
Wed Dec 16 13:34:32 EST 2015


Some time ago I saw this: https://software.intel.com/sites/campaigns/nest/

I don't know if the "community" license applies in your case though. It is
worth taking a look at.

On Wed, Dec 16, 2015 at 4:30 PM, Francesc Alted <faltet at gmail.com> wrote:

> Sorry, I have to correct myself: as per
> http://docs.continuum.io/mkl-optimizations/index, it seems that Anaconda
> does not link with MKL by default (I thought it did before?).
> After installing MKL (conda install mkl), I am getting:
>
> In [1]: import numpy as np
> Vendor:  Continuum Analytics, Inc.
> Package: mkl
> Message: trial mode expires in 30 days
>
> In [2]: testA = np.random.randn(15000, 15000)
>
> In [3]: testb = np.random.randn(15000)
>
> In [4]: %time testx = np.linalg.solve(testA, testb)
> CPU times: user 1min, sys: 468 ms, total: 1min 1s
> Wall time: 15.3 s
>
>
> So it looks like you will need to buy an MKL license separately (which
> makes sense for a commercial product).
>
> Sorry for the confusion.
> Francesc
>
>
> 2015-12-16 18:59 GMT+01:00 Francesc Alted <faltet at gmail.com>:
>
>> Hi,
>>
>> MATLAB probably ships with Intel MKL enabled, which is likely the
>> fastest LAPACK implementation out there.  NumPy supports linking against MKL,
>> and Anaconda actually does that by default, so switching to Anaconda would
>> be a good option for you.
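>>
>> If you want to double-check which BLAS/LAPACK your NumPy is actually
>> linked against, a quick way to see it (the exact output varies between
>> builds) is:
>>
>> import numpy as np
>>
>> # Prints the BLAS/LAPACK libraries this NumPy build was linked against;
>> # look for "mkl" (or e.g. "openblas") in the library names.
>> np.show_config()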
>>
>> Here is what I am getting with Anaconda's NumPy on a machine with
>> 8 cores:
>>
>> In [1]: import numpy as np
>>
>> In [2]: testA = np.random.randn(15000, 15000)
>>
>> In [3]: testb = np.random.randn(15000)
>>
>> In [4]: %time testx = np.linalg.solve(testA, testb)
>> CPU times: user 5min 36s, sys: 4.94 s, total: 5min 41s
>> Wall time: 46.1 s
>>
>> This is not 20 s, but it is not 3 min either (of course, that
>> depends on your machine).
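>>
>> Also note that with a threaded BLAS the wall time depends a lot on how
>> many threads it is allowed to use.  If you want to experiment, the thread
>> count can usually be capped with an environment variable set before NumPy
>> is imported (MKL_NUM_THREADS for MKL, OMP_NUM_THREADS for most other
>> builds), e.g.:
>>
>> import os
>> os.environ["MKL_NUM_THREADS"] = "4"  # cap MKL at 4 threads (illustrative value)
>> import numpy as np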
>>
>> Francesc
>>
>> 2015-12-16 18:34 GMT+01:00 Edward Richards <edwardlrichards at gmail.com>:
>>
>>> I recently did a conceptual experiment to estimate the computational
>>> time required to solve an exact expression in contrast to an approximate
>>> solution (Helmholtz vs. Helmholtz-Kirchhoff integrals). The exact solution
>>> requires a matrix inversion, and in my case the matrix would contain ~15000
>>> rows.
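>>>
>>> As an aside, even though the math is written as a matrix inversion, it is
>>> usually better not to form the inverse explicitly: np.linalg.solve factors
>>> the matrix and solves directly, which is what the benchmark below does.  A
>>> rough sketch of the difference, on a smaller matrix just for illustration:
>>>
>>> import time
>>> import numpy as np
>>>
>>> n = 2000  # illustrative size; the real problem is ~15000
>>> A = np.random.randn(n, n)
>>> b = np.random.randn(n)
>>>
>>> t0 = time.time()
>>> x_solve = np.linalg.solve(A, b)   # LU factorization + triangular solves
>>> t1 = time.time()
>>> x_inv = np.linalg.inv(A).dot(b)   # explicit inverse, then matrix-vector product
>>> t2 = time.time()
>>>
>>> print("solve: %.2f s, inv: %.2f s" % (t1 - t0, t2 - t1))
>>> print("max abs difference: %g" % np.abs(x_solve - x_inv).max())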
>>>
>>> On my machine MATLAB seems to perform this matrix inversion with random
>>> matrices about 9x faster (20 s vs. 3 min). I thought the performance
>>> would be roughly the same, because I presumed both rely on the same
>>> LAPACK solvers.
>>>
>>> I will not actually need to solve this problem (even at 20 s it is
>>> prohibitive for broadband simulation), but if I did I would
>>> reluctantly choose MATLAB. I am simply wondering why there is such a
>>> performance gap, and whether there is a better way to solve this problem in
>>> NumPy.
>>>
>>> Thank you,
>>>
>>> Ned
>>>
>>> #Python version
>>>
>>> import numpy as np
>>>
>>> testA = np.random.randn(15000, 15000)
>>>
>>> testb = np.random.randn(15000)
>>>
>>> %time testx = np.linalg.solve(testA, testb)
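>>>
>>> The %time line is an IPython magic; outside IPython, a rough equivalent
>>> timing would be:
>>>
>>> import time
>>> t0 = time.time()
>>> testx = np.linalg.solve(testA, testb)
>>> print("solve took %.1f s" % (time.time() - t0))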
>>>
>>> %MATLAB version
>>>
>>> testA = randn(15000);
>>>
>>> testb = randn(15000, 1);
>>> tic(); testx = testA \ testb; toc();
>>>
>>>
>>
>>
>> --
>> Francesc Alted
>>
>
>
>
> --
> Francesc Alted
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>