On Thu, Nov 2, 2017 at 2:39 PM, Marten van Kerkwijk <m.h.vankerkwijk@gmail.com> wrote:
Hi Josef,

Indeed, for some applications one would like to have different units
for different parts of an array. And that means that, at present, the
quantity implementations that we have are no good at storing, say, a
covariance matrix involving parameters with different units, where
thus each element of the covariance matrix has a different unit. I
fear at present it would have to be an object array instead; other
cases may be a bit easier to solve, by, e.g., allowing structured
arrays with similarly structured units. I do note that actually doing
it would clarify, e.g., what the axes in Vandermonde (spelling?)
matrices mean.
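A minimal sketch of the problem described above, using plain strings for units (a real implementation would use a units package such as astropy.units): the covariance element cov[i, j] carries the product of the units of parameters i and j, so no single per-array unit can describe the whole matrix.

```python
import numpy as np

# Two parameters with different units, e.g. a position and a time.
param_units = ["m", "s"]

# The unit of each covariance element is the product of the units
# of the two parameters involved, hence an object array of units.
cov_units = np.empty((2, 2), dtype=object)
for i in range(2):
    for j in range(2):
        cov_units[i, j] = f"{param_units[i]}*{param_units[j]}"

print(cov_units)  # every element has a different unit
```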

(I have problems remembering the spelling of proper names.)
See np.vander and the various polyvander functions/methods.
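To make the point about the axes concrete, here is what the columns of np.vander are; if x carried a unit u, each column would carry a different power of u, i.e. a per-column (structured) unit rather than a single one.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
V = np.vander(x, 3)  # columns are x**2, x**1, x**0

# With a unit u on x, the columns would have units u**2, u, and 1.
print(V)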

One point I wanted to make is that the units are overhead and irrelevant during the computation; it is only the outcome that might carry units.
E.g., polyfit could use various underlying polynomial bases, such as numpy.polynomial.chebyshev.chebvander(...), and various linear algebra and projection versions, and the output would still have the same units.
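A small sketch of that point: the same least-squares fit computed in two different polynomial bases gives identical fitted values, so the units of the output do not depend on the internal basis choice.

```python
import numpy as np
from numpy.polynomial import chebyshev

x = np.linspace(-1.0, 1.0, 11)
y = 1.0 + 2.0 * x + 3.0 * x**2

# Same fit in the monomial basis (1, x, x**2) and the Chebyshev
# basis (T0, T1, T2); only an internal change of basis.
V_mono = np.vander(x, 3, increasing=True)
V_cheb = chebyshev.chebvander(x, 2)
yhat_mono = V_mono @ np.linalg.lstsq(V_mono, y, rcond=None)[0]
yhat_cheb = V_cheb @ np.linalg.lstsq(V_cheb, y, rcond=None)[0]
```

The intermediate design matrices differ, but the fitted values (and whatever units y carries) come out the same.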

Aside: I just found an interesting example that is pairwise but uses asanyarray, and another using asarray (for robust scatter).
I guess I would have problems replacing asarray with asanyarray.
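For context, a minimal sketch of the asarray / asanyarray difference: asarray always returns a base ndarray, while asanyarray passes ndarray subclasses (masked arrays, unit-carrying Quantity classes, ...) through unchanged.

```python
import numpy as np

m = np.ma.masked_array([1.0, 2.0, 3.0], mask=[False, True, False])

a = np.asarray(m)     # base ndarray: subclass (and mask) are dropped
b = np.asanyarray(m)  # still a MaskedArray, mask preserved

print(type(a), type(b))
```

Code written against asarray can silently discard the extra information a subclass carries, which is why swapping one for the other is not always safe.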

One last related question: what is the unit of the inverse of a covariance matrix? Element-wise it is just sums, multiplications and divisions (which I wouldn't remember), but computationally it is just np.linalg.inv or np.linalg.pinv, which is a simple shortcut.
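A sketch of how the unit bookkeeping would ride along with that shortcut: if cov[i, j] has unit u_i * u_j, then inv(cov)[i, j] has unit 1 / (u_i * u_j), while the numbers come straight from inv/pinv.

```python
import numpy as np

cov = np.array([[4.0, 1.0],
                [1.0, 2.0]])

# The precision matrix is one call; element [i, j] would carry
# unit 1 / (u_i * u_j) if the parameters had units u_i and u_j.
prec = np.linalg.inv(cov)

print(cov @ prec)  # should be (numerically) the identity
```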


That said, there is truly an enormous benefit in checking units on
"regular" operations. Spacecraft have missed Mars because people
didn't do it properly...



All the best,


p.s. The scipy functions should indeed be included in the ufuncs
covered; there is a fairly long-standing issue for that in astropy...
NumPy-Discussion mailing list