On Sun, Mar 16, 2014 at 10:54 AM, Nathaniel Smith <njs@pobox.com> wrote:
On Sun, Mar 16, 2014 at 2:39 PM, Eelco Hoogendoorn
<hoogendoorn.eelco@gmail.com> wrote:
> Note that I am not opposed to extra operators in python, and only mildly
> opposed to a matrix multiplication operator in numpy; but let me lay out the
> case against, for your consideration.
>
> First of all, the use of matrix semantics relative to array semantics is
> extremely rare; even in linear-algebra-heavy code, array semantics often
> dominate. As such, the default of array semantics for numpy has been a great
> choice. I've never looked back at MATLAB semantics.
Different people work on different code and have different experiences
here -- yours may or may not be typical. Pauli did some quick checks
on scikit-learn & nipy & scipy, and found that in their test suites,
uses of np.dot and uses of elementwise multiplication are ~equally
common: https://github.com/numpy/numpy/pull/4351#issuecomment-37717330
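
For readers keeping score at home, the distinction being counted above: under
numpy's array semantics, * is elementwise, and a matrix product is spelled
np.dot -- which is the call @ would replace. A toy illustration, not taken
from Pauli's survey:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    B = np.array([[5.0, 6.0],
                  [7.0, 8.0]])

    A * B          # elementwise product: [[ 5., 12.], [21., 32.]]
    np.dot(A, B)   # matrix product:      [[19., 22.], [43., 50.]]
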
> Secondly, I feel the urge to conform to a historical mathematical notation
> is misguided, especially for the problem domain of linear algebra. Perhaps
> in the world of mathematics your operation is associative or commutes, but
> on your computer, the order of operations will influence both outcomes and
> performance. Even for products, we usually care not only about the outcome,
> but also how that outcome is arrived at. And along the same lines, I don't
> suppose I need to explain how I feel about A@@-1 and the like. Sure, it
> isn't too hard to learn or infer that this implies a matrix inverse, but why
> on earth would I want to pretend the rich complexity of numerical matrix
> inversion can be mangled into one symbol? I'd much rather write inv or pinv,
> or whatever particular algorithm happens to be called for given the
> situation. Considering this isn't the num-lisp discussion group, I suppose I
> am hardly the only one who feels so.
>
My impression from the other thread is that @@ probably won't end up
existing, so you're safe here ;-).
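
For concreteness on the inverse point: the explicit spellings Eelco prefers
already exist in np.linalg, and each names its own algorithm; the A @@ -1
spelling below is purely hypothetical syntax, shown only in a comment:

    import numpy as np

    A = np.random.rand(4, 4)
    b = np.random.rand(4)

    # hypothetical operator spelling (not implemented): x = (A @@ -1) times b
    # explicit spellings, each naming its own algorithm:
    x_inv   = np.dot(np.linalg.inv(A), b)    # form the inverse, then multiply (rarely ideal)
    x_solve = np.linalg.solve(A, b)          # LU-based solve, usually preferred
    x_pinv  = np.dot(np.linalg.pinv(A), b)   # SVD-based pseudo-inverse, for ill-posed problems
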
> On the whole, I feel the @ operator is mostly superfluous. I prefer to be
> explicit about where I place my brackets. I prefer to be explicit about the
> data layout and axes that go into a (multi)linear product, rather than rely
> on obtuse row/column conventions which are not transparent across function
> calls. When I do linear algebra, it is almost always vectorized over
> additional axes; how does a special operator which is only well defined for
> a few special cases of 2d and 1d tensors help me with that?
Einstein notation is coming up on its 100th birthday and is just as
blackboard-friendly as matrix product notation. Yet there's still a
huge number of domains where the matrix notation dominates. It's cool
if you aren't one of the people who find it useful, but I don't think
it's going anywhere soon.
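
On the "vectorized over additional axes" question: PEP 465 specifies that for
arrays with more than two dimensions, @ acts on the last two axes and
broadcasts over the leading ones, so stacked products are covered; einsum
remains the fully explicit spelling. A rough sketch with made-up shapes (the @
below runs on modern numpy, where it maps to np.matmul):

    import numpy as np

    # ten independent 3x3 matrices applied to ten vectors
    A = np.random.rand(10, 3, 3)
    x = np.random.rand(10, 3)

    # einsum spells out which axes are contracted and which are just carried along
    y1 = np.einsum('nij,nj->ni', A, x)

    # stacked matrix product: @ / np.matmul broadcast over the leading axis
    y2 = (A @ x[:, :, None])[:, :, 0]

    assert np.allclose(y1, y2)
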
> Note that I don't think there is much harm in an @ operator; but I don't see
> myself using it either. Aside from making textbook examples like a
> Gram-Schmidt orthogonalization more compact to write, I don't see it having
> much of an impact in the real world.
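
To make the Gram-Schmidt remark concrete, a classical-Gram-Schmidt sketch,
written with np.dot as numpy has today and with the proposed operator in a
comment; illustration only, not code from the thread:

    import numpy as np

    def gram_schmidt(A):
        """Orthonormalize the columns of A (classical Gram-Schmidt, no reorthogonalization)."""
        Q = np.zeros_like(A, dtype=float)
        for j in range(A.shape[1]):
            v = A[:, j].astype(float)
            for i in range(j):
                # with the proposed operator: v -= (Q[:, i] @ v) * Q[:, i]
                v -= np.dot(Q[:, i], v) * Q[:, i]
            Q[:, j] = v / np.linalg.norm(v)
        return Q
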
The analysis in the PEP found ~780 calls to np.dot, just in the two
projects I happened to look at. @ will get tons of use in the real
world. Maybe all those people who will be using it would be happier if
they were using einsum instead, I dunno, but it's an argument you'll
have to convince them of, not me :-).

Just as an example: I just read for the first time two journal articles in
econometrics that use einsum notation. I have no idea what their formulas
are supposed to mean -- no sum signs and no matrix algebra. I need to have
a strong incentive to stare at those formulas again.

(statsmodels search finds 1520 "dot", including sandbox and examples)

Josef
<TODO: learn how to use einsums>
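
For anyone else with that TODO item, a minimal np.einsum illustration; the
shapes and formulas below are made up for the example, not taken from those
articles:

    import numpy as np

    X = np.random.rand(50, 3)   # e.g. a design matrix: n observations, k regressors
    w = np.random.rand(50)      # per-observation weights

    # X' diag(w) X, written with dot+broadcasting and with einsum's index notation
    xwx_dot = np.dot(X.T * w, X)
    xwx_ein = np.einsum('ni,n,nj->ij', X, w, X)   # sum over the repeated index n
    assert np.allclose(xwx_dot, xwx_ein)

    # a plain matrix product in einsum form, for comparison with np.dot / the proposed @
    assert np.allclose(np.einsum('ij,jk->ik', X.T, X), np.dot(X.T, X))
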
-n
--
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion