[Numpy-discussion] Fwd: Backslash operator A\b and np/sp.linalg.solve

Ilhan Polat ilhanpolat at gmail.com
Mon Jan 9 20:09:48 EST 2017


Indeed, detecting the generic case is the cheapest discovery, even in the
worst case where only the last off-diagonal element is nonzero. In
pseudocode: first remove the diagonal and check the remaining entries for
nonzeros, then check the upper triangle, then the lower, then infer
"morally" (permuted) triangular structure from the zero pattern if any, then
bandedness, and so on. If you have access to MATLAB, you can set the sparse
monitor to verbose mode with spparms('spumoni', 1) and perform a backslash
operation on sparse matrices; it will report what it does during the checks,
as in the example below.

    A = sparse([0 2 0 1 0; 4 -1 -1 0 0; 0 0 0 3 -6; -2 0 0 0 2; 0 0 4 2 0]);
    B = sparse([8; -1; -18; 8; 20]);
    spparms('spumoni', 1)   % enable verbose sparse-solver diagnostics
    x = A\B                 % prints which checks/solvers backslash uses
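
In NumPy, a pure-Python version of such checks could look roughly like the
sketch below. This is only an illustration of the idea; the function name,
the returned labels and the particular checks are my own and nothing like
this exists in np.linalg:

    import numpy as np

    def detect_structure(A):
        # Try the structures roughly from cheapest to most generic and
        # return at the first match.  Note that np.any scans the whole
        # array; a C implementation could bail out at the first
        # offending element instead.
        if not np.any(A - np.diag(np.diag(A))):   # off-diagonal part all zero
            return 'diagonal'
        if not np.any(np.tril(A, -1)):            # strictly lower part zero
            return 'upper-triangular'
        if not np.any(np.triu(A, 1)):             # strictly upper part zero
            return 'lower-triangular'
        if np.array_equal(A, A.conj().T):         # hermitian / real symmetric
            return 'hermitian'
        return 'generic'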

So every test in the polyalgorithm is cheaper than the one that follows it.
I'm not exactly sure yet what the best strategy would be, hence the question.
It's really interesting that LAPACK doesn't provide this kind of fast
structure check.
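
To make the intent concrete, here is a rough sketch of a backslash-like
front end on top of scipy.linalg, using the detect_structure helper sketched
above. The name backslash and the very incomplete dispatch table are
invented for illustration only (banded, positive-definite, least-squares,
... branches are all omitted); the point is merely that each branch ends up
in a specialized routine instead of the generic one:

    import numpy as np
    from scipy.linalg import solve, solve_triangular, cho_factor, cho_solve

    def backslash(A, b):
        # Dispatch on the detected structure (b assumed 1-D here).
        kind = detect_structure(A)
        if kind == 'diagonal':
            return b / np.diag(A)
        if kind in ('upper-triangular', 'lower-triangular'):
            return solve_triangular(A, b, lower=(kind == 'lower-triangular'))
        if kind == 'hermitian':
            try:
                # Attempt Cholesky; fall back if A is not positive definite.
                return cho_solve(cho_factor(A), b)
            except np.linalg.LinAlgError:
                return solve(A, b)
        return solve(A, b)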

On Mon, Jan 9, 2017 at 8:30 PM, <josef.pktd at gmail.com> wrote:

>
>
> On Mon, Jan 9, 2017 at 6:27 AM, Ilhan Polat <ilhanpolat at gmail.com> wrote:
>
>> > Note that you're proposing a new scipy feature (right?) on the numpy
>> list....
>>
>> > This sounds like a good idea to me. As a former heavy Matlab user I
>> remember a lot of things to dislike, but "\" behavior was quite nice.
>>
>> Correct, and I am not sure where this should go. It seemed like a NumPy
>> array operation (rapidly touching array elements could also serve
>> functionality other than solve), hence the NumPy list. But of course it
>> could be added as a SciPy-only feature. I'm not sure what the outlook on
>> np.linalg.solve is.
>>
>>
>> > How much is a noticeable slowdown? Note that we still have the current
>> interfaces available for users that know what they need, so a nice
>> convenience function that is, say, 5-10% slower would not be the end of
>> the world.
>>
>> The fastest case was around 150-400% slower, but it may well be that I'm
>> not using the fastest methods; in the pure Python 3 version it was mostly
>> shuffling things around and calling np.any on them. I will put together a
>> proper baseline again as soon as I have time.
>>
>>
>>
> All these checks sound a bit expensive if we almost always have completely
> unstructured arrays that don't satisfy any special matrix pattern.
>
> In analogy to the type proliferation in Julia to handle those cases: is
> there a way to attach information to numpy arrays that, for example,
> signals that a 2d array is hermitian, banded, diagonal, or ...?
>
> (On second thought: maybe the completely unstructured case is not too
> expensive to detect if the checks are short-circuited: one off-diagonal
> element nonzero -> not diagonal, two opposite off-diagonal elements
> different -> not symmetric, ...)
>
> Josef
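
As for attaching such information to an array: a minimal, entirely
hypothetical sketch of carrying a structure hint on an ndarray subclass
(NumPy has no such flag, and keeping it consistent would be up to the user
or to the routines that create the array):

    import numpy as np

    class StructuredArray(np.ndarray):
        # Illustrative subclass carrying a user-supplied structure hint,
        # e.g. 'hermitian', 'banded', 'diagonal'.
        def __new__(cls, input_array, structure=None):
            obj = np.asarray(input_array).view(cls)
            obj.structure = structure
            return obj

        def __array_finalize__(self, obj):
            if obj is None:
                return
            self.structure = getattr(obj, 'structure', None)

    # A solver could then skip detection when the hint is present:
    #     A = StructuredArray(data, structure='hermitian')
    #     if getattr(A, 'structure', None) == 'hermitian':
    #         ...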