
When you try to transpose a 1D array, it does nothing. This is the correct behavior, since transposing a 1D array is meaningless. However, it can often lead to unexpected errors since it is rarely what you want. You can convert the array to 2D using `np.atleast_2d` or `arr[None]`, but this makes simple linear algebra computations more difficult. I propose adding an argument to transpose, perhaps called `expand` or `expanddim`, which if `True` (it is `False` by default) will force the array to be at least 2D. A shortcut property, `ndarray.T2`, would be the same as `ndarray.transpose(True)`.
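A minimal sketch of the behavior being discussed. `T2`/`expanddim` are only proposed names, so the proposed result is spelled here with the existing workarounds:

```python
import numpy as np

a = np.arange(3)          # shape (3,)
assert a.T.shape == (3,)  # .T is a no-op on 1D arrays

# Current workarounds to get a 2D row vector:
assert np.atleast_2d(a).shape == (1, 3)
assert a[None].shape == (1, 3)

# Under the proposal, a.transpose(expanddim=True) -- or the shorthand
# a.T2 -- would first promote to at least 2D, then transpose:
assert np.atleast_2d(a).T.shape == (3, 1)
```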

On Tue, Apr 5, 2016 at 7:11 PM, Todd <toddrjen@gmail.com> wrote:
An alternative that was mentioned in the bug tracker (https://github.com/numpy/numpy/issues/7495), possibly by me, would be to have arr.T2 act as a stacked-transpose operator, i.e. treat an arr with shape (..., n, m) as being a (...)-shaped stack of (n, m) matrices, and transpose each of those matrices, so the output shape is (..., m, n). And since this operation intrinsically acts on arrays with shape (..., n, m) then trying to apply it to a 0d or 1d array would be an error. -n -- Nathaniel J. Smith -- https://vorpus.org
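The stacked-transpose semantics described above can be spelled today with `swapaxes`; a small sketch:

```python
import numpy as np

# A (...)-shaped stack of (n, m) matrices: here 5 matrices of shape (3, 4).
stack = np.zeros((5, 3, 4))

# The stacked transpose flips only the last two axes:
out = np.swapaxes(stack, -1, -2)
assert out.shape == (5, 4, 3)
```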

Nathaniel Smith <njs <at> pobox.com> writes:
I think that the problem is not that it doesn't raise an error for 1D arrays, but that it doesn't do anything useful to 1D arrays. Raising an error would change nothing about the way transpose is used now. For a 1D array a of shape (N,), I expect a.T2 to be of shape (N, 1), which is useful when writing formulas, and clearer than a[None].T. Actually I'd like a.T to do that already, but I guess backward compatibility is more important.

No, but it would make it clear that you can't expect transpose to make a 1D array into a 2D array.
For a 1D array a of shape (N,), I expect a.T2 to be of shape (N, 1),
Why not (1,N)? -- it is not well defined, though I suppose it's not so bad to establish a convention that a 1-D array is a "row vector" rather than a "column vector". But the truth is that Numpy arrays are arrays, not matrices and vectors. The "right" way to do this is to properly extend and support the matrix object, adding row and column vector objects, and then it would be clear. But while there has been a lot of discussion about that in the past, the fact is that no one wants it bad enough to write the code. So I think it's better to keep Numpy arrays "pure", and if you want to change the rank of an array, you do so explicitly. I use: A_vector.shape = (-1,1) BTW, if transposing a (N,) array gives you a (N,1) array, what does transposing a (N,1) array give you? (1,N) or (N,) ? -CHB

On Wed, Apr 6, 2016 at 11:44 AM, Chris Barker - NOAA Federal < chris.barker@noaa.gov> wrote:
I think that cat is already out of the bag. As long as you can do matrix multiplication on arrays using the @ operator, I think they aren't really "pure" anymore.
My suggestion is that this explicitly increases the number of dimensions to at least 2. The result will always have at least 2 dimensions. So 0D -> 2D, 1D -> 2D, 2D -> 2D, 3D -> 3D, 4D -> 4D, etc. So this would be equivalent to the existing `atleast_2d` function.
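The dimension-promotion table above matches what `np.atleast_2d` already does, which can be checked directly:

```python
import numpy as np

# np.atleast_2d pads missing dimensions on the left, and leaves
# arrays that already have >= 2 dimensions alone:
assert np.atleast_2d(np.array(5.0)).shape == (1, 1)           # 0D -> 2D
assert np.atleast_2d(np.zeros(3)).shape == (1, 3)             # 1D -> 2D
assert np.atleast_2d(np.zeros((2, 3))).shape == (2, 3)        # 2D -> 2D
assert np.atleast_2d(np.zeros((2, 3, 4))).shape == (2, 3, 4)  # 3D -> 3D
```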

On 4/6/2016 1:47 PM, Todd wrote:
I truly hope nothing is done like this. But underlying the proposal is apparently the idea that there be an attribute equivalent to `atleast_2d`. Then call it `d2p`. You can now have `a.d2p.T` which is a lot more explicit and general than say `a.T2`, while requiring only 3 more keystrokes. (It's still horribly ugly, though, and I hope this too is dismissed.) Alan Isaac

On Wed, Apr 6, 2016 at 10:47 AM, Todd <toddrjen@gmail.com> wrote:
not really -- you still need to use arrays that are the "correct" shape. Ideally, a row vector is (1, N) and a column vector is (N,1). Though I know there are places that a 1-D array is treated as a column vector.
my point is that for 2D arrays: arr.T.T == arr, but in this case, we would be making a one way street: when you transpose a 1D array, you treat it as a row vector, and return a "column vector" -- a (N,1) array. But when you transpose a "column vector" to get a row vector, you get a (1,N) array, not a (N) array. So I think we need to either have proper row and column vectors (to go with matrices) or require people to create the appropriate 2D arrays. Perhaps there should be an easier more obvious way to spell "make this a column vector", but I don't think .T is it. Though arr.shape = (-1,1) has always worked fine for me. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov

On Tue, Apr 5, 2016 at 9:14 PM Nathaniel Smith <njs@pobox.com> wrote:
I agree that we could really use a shorter syntax for a broadcasting transpose. Swapaxes is far too verbose for something that should be so common now that we've introduced the new matmul operator. That said, the fact that 1-D vectors are conceptually so similar to row vectors makes transposing a 1-D array a potential pitfall for a lot of people. When broadcasting along the leading dimension, a (n) shaped array and a (1, n) shaped array are already treated as equivalent. Treating a 1-D array like a row vector for transposes seems like a reasonable way to make things more intuitive for users. Rather than raising an error for arrays with fewer than two dimensions, the new syntax could be made equivalent to np.swapaxes(np.atleast_2d(arr), -1, -2). From the standpoint of broadcasting semantics, using atleast_2d can be viewed as allowing broadcasting along the inner dimensions. Though that's not a common thing, at least there's a precedent. The only downside I can see with allowing T2 to call atleast_2d is that it would make things like A @ b and A @ b.T2 equivalent when b is one-dimensional. That's already the case with our current syntax though. There's some inherent design tension between the fact that broadcasting usually prepends ones to fill in missing dimensions and the fact that our current linear algebra semantics often treat rows as columns, but making 1-D arrays into rows makes a lot of sense as far as user experience goes. Great ideas everyone! Best, -Ian Henriksen
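The proposed semantics can be sketched as a small helper (the name `t2` is made up for illustration):

```python
import numpy as np

def t2(arr):
    """Hypothetical T2: promote to at least 2D, then swap the last two axes."""
    return np.swapaxes(np.atleast_2d(arr), -1, -2)

assert t2(np.zeros(4)).shape == (4, 1)             # 1D "row" -> column
assert t2(np.zeros((2, 3))).shape == (3, 2)        # ordinary transpose
assert t2(np.zeros((5, 2, 3))).shape == (5, 3, 2)  # broadcasting transpose
```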

On Tue, Apr 5, 2016 at 11:14 PM, Nathaniel Smith <njs@pobox.com> wrote:
My intention was to make linear algebra operations easier in numpy. With the @ operator available, it is now very easy to do basic linear algebra on arrays without needing the matrix class. But getting an array into a state where you can use the @ operator effectively is currently pretty verbose and confusing. I was trying to find a way to make the @ operator more useful.

On Wed, Apr 6, 2016 at 10:43 AM, Todd <toddrjen@gmail.com> wrote:
Can you elaborate on what you're doing that you find verbose and confusing, maybe paste an example? I've never had any trouble like this doing linear algebra with @ or dot (which have similar semantics for 1d arrays), which is probably just because I've had different use cases, but it's much easier to talk about these things with a concrete example in front of us to put everyone on the same page. -n -- Nathaniel J. Smith -- https://vorpus.org

On Wed, Apr 6, 2016 at 5:20 PM, Nathaniel Smith <njs@pobox.com> wrote:
Let's say you want to do a simple matrix multiplication example. You create two example arrays like so: a = np.arange(20) b = np.arange(10, 50, 10) Now you want to do a.T @ b First you need to turn a into a 2D array. I can think of 10 ways to do this off the top of my head, and there may be more: 1a) a[:, None] 1b) a[None] 1c) a[None, :] 2a) a.shape = (1, -1) 2b) a.shape = (-1, 1) 3a) a.reshape(1, -1) 3b) a.reshape(-1, 1) 4a) np.reshape(a, (1, -1)) 4b) np.reshape(a, (-1, 1)) 5) np.atleast_2d(a) 5 is pretty clear, and will work fine with any number of dimensions, but is also long to type out when trying to do a simple example. The different variants of 1, 2, 3, and 4, however, will only work with 1D arrays (making them less useful for functions), are not immediately obvious to me what the result will be (I always need to try it to make sure the result is what I expect), and are easy to get mixed up in my opinion. They also require people keep a mental list of lots of ways to do what should be a very simple task. Basically, my argument here is the same as the argument from pep465 for the inclusion of the @ operator: https://www.python.org/dev/peps/pep-0465/#transparent-syntax-is-especially-c... "A large proportion of scientific code is written by people who are experts in their domain, but are not experts in programming. And there are many university courses run each year with titles like "Data analysis for social scientists" which assume no programming background, and teach some combination of mathematical techniques, introduction to programming, and the use of programming to implement these mathematical techniques, all within a 10-15 week period. These courses are more and more often being taught in Python rather than special-purpose languages like R or Matlab. 
For these kinds of users, whose programming knowledge is fragile, the existence of a transparent mapping between formulas and code often means the difference between succeeding and failing to write that code at all."
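The variants listed above split into two groups by result shape, which is exactly why they are easy to mix up; a quick check:

```python
import numpy as np

a = np.arange(20)

# The "column" variants give shape (20, 1):
assert a[:, None].shape == (20, 1)
assert a.reshape(-1, 1).shape == (20, 1)

# The "row" variants give shape (1, 20):
assert a[None].shape == (1, 20)
assert a[None, :].shape == (1, 20)
assert np.reshape(a, (1, -1)).shape == (1, 20)
assert np.atleast_2d(a).shape == (1, 20)

# The outer product a.T @ b from the example then needs an explicit choice:
b = np.arange(10, 50, 10)                 # shape (4,)
assert (a[:, None] @ b[None]).shape == (20, 4)
```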

On Thu, Apr 7, 2016 at 11:13 AM, Todd <toddrjen@gmail.com> wrote:
This doesn't work because of the ambiguity between column and row vectors. In most cases 1d vectors in statistics/econometrics are column vectors. Sometimes it takes me a long time to figure out whether an author uses row or column vectors for transpose. I.e. I often need x.T dot y, which works for 1d and 2d to produce an inner product, but the outer product would require most of the time a column vector, so it's defined as x dot x.T. I think keeping around explicitly 2d arrays if necessary is less error prone and confusing. But I wouldn't mind a shortcut for atleast_2d (although more often I need atleast_2dcol to translate formulas) Josef

On Thu, Apr 7, 2016 at 11:35 AM, <josef.pktd@gmail.com> wrote:
At least from what I have seen, in all cases in numpy where a 1D array is treated as a 2D array, it is always treated as a row vector, the examples I can think of being atleast_2d, hstack, vstack, and dstack. So using this convention would be in line with how it is used elsewhere in numpy.
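The row-vector convention named above is easy to verify against the current NumPy functions:

```python
import numpy as np

a = np.array([1, 2, 3])

# atleast_2d turns a 1D array into a row:
assert np.atleast_2d(a).shape == (1, 3)

# vstack stacks 1D arrays as rows:
assert np.vstack([a, a]).shape == (2, 3)

# hstack concatenates 1D arrays end to end, consistent with row behavior:
assert np.hstack([a, a]).shape == (6,)
```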

On Thu, Apr 7, 2016 at 11:42 AM, Todd <toddrjen@gmail.com> wrote:
AFAIK, linear algebra works differently, 1-D is special
>>> yy[:4].dot(xx)
array([70, 76, 82, 88, 94])
>>> np.__version__
'1.6.1'
I don't think numpy treats 1d arrays as row vectors. numpy has C-order for axis preference which coincides in many cases with row vector behavior.
It's not an uncommon exception for me. Josef

On Do, 2016-04-07 at 11:56 -0400, josef.pktd@gmail.com wrote:
<snip>
Well, broadcasting rules are that (n,) should typically behave similarly to (1, n). However, for dot/matmul and @ the rules are stretched to mean "the one dimensional thing that gives an inner product" (using matmul since my python has no @ yet):

In [12]: a = np.arange(20)
In [13]: b = np.arange(20)
In [14]: np.matmul(a, b)
Out[14]: 2470
In [15]: np.matmul(a, b[:, None])
Out[15]: array([2470])
In [16]: np.matmul(a[None, :], b)
Out[16]: array([2470])
In [17]: np.matmul(a[None, :], b[:, None])
Out[17]: array([[2470]])

which indeed gives us a fun thing, because if you look at the last line, the outer product equivalent would be:

outer = np.matmul(a[None, :].T, b[:, None].T)

Now if I go back to the earlier example, a.T @ b does not achieve the outer product at all using T2, since:

a.T2 @ b.T2  # only correct for a, but not for b
a.T2 @ b     # b attempts to be "inner", so does not work

It almost seems to me that the example is a counter example, because on first sight the `T2` attribute would still leave you with no shorthand for `b`. I understand the pain of having to write (and parse, to get into the depth of) things like `arr[:, np.newaxis]` or reshape. I also understand the idea of a shorthand for vectorized matrix operations. That is, an argument for a T2 attribute which errors on 1D arrays (not sure I like it, but that is a different issue). However, it seems that implicitly adding an axis which only works half the time does not help too much? I have to admit I don't write these things too much, but I wonder if it would not help more if we just provided some better information/link to longer examples in the "dimension mismatch" error message? In the end it is quite simple: as Nathaniel, I think I would like to see some example code where the code obviously looks easier than before. With the `@` operator that was the case; with the "dimension adding logic" I am not so sure, plus it seems it may add other pitfalls. - Sebastian

On Do, 2016-04-07 at 13:29 -0400, josef.pktd@gmail.com wrote:
Actually, better would be: a.T2 @ b.T2.T2 # Aha? And true enough, that works, but is it still reasonably easy to find and understand? Or is it just fiddling around, the same as you would try `a[:, None]` before finding `a[None, :]`, maybe worse? - Sebastian

On Thu, Apr 7, 2016 at 8:13 AM, Todd <toddrjen@gmail.com> wrote:
Basically, my argument here is the same as the argument from pep465 for the
column vector, but I don't think overloading transpose is the way to do that.
-CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov

On Wed, Apr 6, 2016 at 3:21 PM Nathaniel Smith <njs@pobox.com> wrote:
Here's another example that I've seen catch people now and again. A = np.random.rand(100, 100) b = np.random.rand(10) A * b.T In this case the user pretty clearly meant to be broadcasting along the rows of A rather than along the columns, but the code fails silently. When an issue like this gets mixed into a larger series of broadcasting operations, the error becomes difficult to find. This error isn't necessarily unique to beginners either. It's a common typo that catches intermediate users who know about broadcasting semantics but weren't keeping close enough track of the dimensionality of the different intermediate expressions in their code. Best, -Ian Henriksen
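A sketch of the silent failure described here. Note that `b` below has length 100 so the shapes line up; the `rand(10)` in the message appears to be a typo, as acknowledged later in the thread:

```python
import numpy as np

A = np.random.rand(100, 100)
b = np.random.rand(100)

# .T is a no-op on 1D arrays, so b.T == b and this broadcasts b across
# each row of A (scaling the *columns*), not down the rows as intended:
assert np.allclose(A * b.T, A * b)

# Broadcasting down the rows (row i scaled by b[i]) needs an explicit column:
row_scaled = A * b[:, None]
assert row_scaled.shape == (100, 100)
assert not np.allclose(row_scaled, A * b)
```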

On Thu, Apr 7, 2016 at 10:00 AM, Ian Henriksen < insertinterestingnamehere@gmail.com> wrote:
typo? that was supposed to be b = np.random.rand(100). yes? This is exactly what someone else referred to as the expectations of someone that comes from MATLAB, and doesn't yet "get" that 1D arrays are 1D arrays. All of this is EXACTLY the motivation for the matrix class -- which never took off, and was never complete (it needed a row and column vector implementation, if you ask me). But I think the reason it didn't take off is that it really isn't that useful, yet is different enough from regular arrays to be a greater source of confusion. And it was decided that all people REALLY wanted was an easy and obvious way to get matrix multiply, which we now have with @. So this discussion brings up that we also need an easy and obvious way to make a column vector -- maybe: np.col_vector(arr) which would be a synonym for np.reshape(arr, (-1,1)) would that make anyone happy? NOTE: having transposing a 1D array raise an exception would help remove a lot of the confusion, but it may be too late for that.... In this case the user pretty clearly meant to be broadcasting along the
rows of A rather than along the columns, but the code fails silently.
hence the exception idea.... maybe a warning? -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov

On Thu, Apr 7, 2016 at 2:17 PM, Chris Barker <chris.barker@noaa.gov> wrote:
AFAIR, there is a lot of code that works correctly with .T being a noop for 1D e.g. covariance matrix/inner product x.T dot y as mentioned before. write unit tests with non square 2d arrays and the exception / test error shows up fast. Josef

On Thu, 7 Apr 2016 14:31:17 -0400, josef.pktd@gmail.com wrote:
FWIW I would give a +1e42 to something like np.colvect and np.rowvect (or whatever variant of these names). This is human readable and does not break anything, it's just an explicit shortcut to reshape/atleast_2d/etc. Regards.

On Thu, Apr 7, 2016 at 3:26 PM, Ian Henriksen < insertinterestingnamehere@gmail.com> wrote:
The current behavior is perfectly well defined, and I don't want a lot of warnings showing up because .T suddenly works only for ndim != 1. I make lots of mistakes during programming. But shape mismatches are usually very fast to catch. If you want safe programming, then force everyone to use only 2-D like in matlab. It would have prevented me from making many mistakes.
>>> np.array(1).T
array(1)
another noop. Why doesn't it convert it to 2d? Josef

On Thu, Apr 7, 2016 at 1:53 PM <josef.pktd@gmail.com> wrote:
I think we've misunderstood each other. Sorry if I was unclear. As I've understood the discussion thus far, "raising an error" refers to raising an error when a 1D array is used with the syntax a.T2 (for swapping the last two dimensions?). As far as whether or not a.T should raise an error for 1D arrays, that ship has definitely already sailed. I'm making the case that there's value in having an abbreviated syntax that helps prevent errors from accidentally using a 1D array, not that we should change the existing semantics. Cheers, -Ian

On Thu, Apr 7, 2016 at 11:31 AM, <josef.pktd@gmail.com> wrote:
oh well, then no warning, either.
write unit tests with non square 2d arrays and the exception / test error shows up fast.
Guido wrote a note to python-ideas about the conflict between the use cases of "scripting" and "large system development" -- he urged both camps, to respect and listen to each other. I think this is very much a "scripters" issue -- so no unit tests, etc.... For my part, I STILL have to kick myself once in a while for using square arrays in testing/exploration! -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov

On Thu, Apr 7, 2016 at 12:18 PM Chris Barker <chris.barker@noaa.gov> wrote:
Hahaha, thanks, yes, in describing a common typo I demonstrated another one. At least this one doesn't fail silently.
Most of the cases I've seen this error have come from people unfamiliar with matlab who, like I said, weren't tracking dimensions quite as carefully as they should have. That said, it's just anecdotal evidence. I wouldn't be at all surprised if this were an issue for matlab users as well. As far as the matrix class goes, we really shouldn't be telling anyone to use that anymore.
Yep. An exception may be the best way forward here. My biggest objection is that the current semantics make it easy for people to silently get unintended behavior.
maybe a warning?
-CHB
-Ian Henriksen

On 7 April 2016 at 11:17, Chris Barker <chris.barker@noaa.gov> wrote:
I'm curious to see use cases where this doesn't solve the problem. The most common operations that I run into: colvec = lambda x: np.c_[x] x = np.array([1, 2, 3]) A = np.arange(9).reshape((3, 3)) 1) x @ x (equivalent to x @ colvec(x)) 2) A @ x (equivalent to A @ colvec(x), apart from the shape) 3) x @ A 4) x @ colvec(x) -- gives an error, but perhaps this should work and be equivalent to np.dot(colvec(x), rowvec(x)) ? If (4) were changed, 1D arrays would mostly* be interpreted as row vectors, and there would be no need for a rowvec function. And we already do that kind of magic for (2). Stéfan * not for special case (1)
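The `colvec` helper from this message can be checked directly; the values below are worked out by hand:

```python
import numpy as np

colvec = lambda x: np.c_[x]      # np.c_ turns a 1D array into a column
x = np.array([1, 2, 3])
A = np.arange(9).reshape((3, 3))

assert colvec(x).shape == (3, 1)
assert x @ x == 14                                   # case 1: inner product
assert np.array_equal(A @ x, np.array([8, 26, 44]))  # case 2
assert (colvec(x) @ colvec(x).T).shape == (3, 3)     # explicit outer product
```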

On Fri, Apr 8, 2016 at 9:59 AM, Charles R Harris <charlesr.harris@gmail.com> wrote:
I don't follow this. Wouldn't it only be an issue for 1D arrays, rather than the "last index"? Or maybe I'm totally missing the point. But anyway, are (N,1) and (1, N) arrays insufficient for representing column and row vectors for some reason? If not -- then we have a way to express a column or row vector, we just need an easier and more obvious way to create them. *maybe* we could have actual column and row vector classes -- they would BE regular arrays, with (1,N) or (N,1) dimensions, and act the same in every way except their __repr__. And we'd provide handy factory functions for them. These were needed to complete the old Matrix class -- which is no longer needed now that we have @ (i.e. a 2D array IS a matrix). Note: this is not very well thought out! -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov

On Fri, Apr 8, 2016 at 5:11 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
if a is 1d or 2d, and vrow and vcol could also be 2d arrays (not a single row or col), this is just a part of a long linear algebra expression. 1d dot 1d is different from vrow dot vcol. A dot 1d is different from A dot vcol. There are intentional differences in the linear algebra behavior of 1d arrays versus col or row vectors. One of those is dropping the extra dimension. We are using this a lot to switch between 1-d and 2-d cases. And another great thing about numpy is that often code immediately generalizes from 1-d to 2d with just some tiny adjustments. (I haven't played with @ yet.) I worry that making 1-d arrays suddenly behave ambiguously as a weird 1-d/2-d mixture will make code more inconsistent and more difficult to follow. Shortcuts and variations of atleast_2d sound fine, but not implicitly. Josef

On Thu, Apr 7, 2016 at 4:04 PM Stéfan van der Walt <stefanv@berkeley.edu> wrote:
Thinking this over a bit more, I think a broadcasting transpose that errors out on arrays that are less than 2D would cover the use cases of which I'm aware. The biggest things to me are having a broadcasting 2D transpose and having some form of transpose that doesn't silently pass 1D arrays through unchanged. Adding properties like colvec and rowvec has less bearing on the use cases I'm aware of, but they both provide nice syntax sugar for common reshape operations. It seems like it would cover all the needed cases for simplifying expressions involving matrix multiplication. It's not totally clear what the semantics should be for higher dimensional arrays or 2D arrays that already have a shape incompatible with the one desired. Best, -Ian Henriksen
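The erroring variant described here is easy to sketch as a helper (the name `btranspose` is made up for illustration):

```python
import numpy as np

def btranspose(arr):
    """Hypothetical broadcasting transpose: swap the last two axes,
    refusing arrays with fewer than two dimensions."""
    if arr.ndim < 2:
        raise ValueError("broadcasting transpose requires at least 2 dimensions")
    return np.swapaxes(arr, -1, -2)

assert btranspose(np.zeros((2, 3))).shape == (3, 2)
assert btranspose(np.zeros((5, 2, 3))).shape == (5, 3, 2)
try:
    btranspose(np.zeros(3))
except ValueError:
    pass  # 1D input is an error rather than a silent no-op
```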

On 4/8/2016 4:28 PM, Ian Henriksen wrote:
The biggest things to me are having a broadcasting 2D transpose and having some form of transpose that doesn't silently pass 1D arrays through unchanged.
This comment, like much of this thread, seems to long for the matrix class but not want to actually use it. It seems pretty simple to me: if you want everything forced to 2d, always use the matrix class. If you want to use arrays, they work nicely now, and they work as expected once you understand what you are working with. (I.e., *not* matrices.) Btw, numpy.outer(a, b) produces an outer product. This may be off topic, but it seemed to me that some of the discussion overlooks this. I suggest that anyone who thinks numpy is falling short in this area point out how Mma has addressed this shortcoming. Wolfram will never be accused of a reluctance to add functions when there is a perceived need ... Cheers, Alan Isaac

On Fri, Apr 8, 2016 at 2:09 PM, Alan Isaac <alan.isaac@gmail.com> wrote:
Note the word "broadcasting" -- he doesn't want 2d matrices, he wants tools that make it easy to work with stacks of 2d matrices stored in 2-or-more-dimensional arrays. -n -- Nathaniel J. Smith -- https://vorpus.org

On Fri, Apr 8, 2016 at 4:04 PM Alan Isaac <alan.isaac@gmail.com> wrote:
Sorry if there's any misunderstanding here. Map doesn't really help much. That'd only be good for dealing with three dimensional cases and you'd get a list of arrays, not a view with the appropriate axes swapped. np.einsum('...ji', a) np.swapaxes(a, -1, -2) np.rollaxis(a, -1, -2) all do the right thing, but they are all fairly verbose for such a simple operation. Here's a simple example of when such a thing would be useful. With 2D arrays you can write a.dot(b.T) If you want to have that same operation follow the existing gufunc broadcasting semantics you end up having to write one of the following np.einsum('...ij,...kj', a, b) a @ np.swapaxes(b, -1, -2) a @ np.rollaxis(b, -1, -2) None of those are very concise, and, when I look at them, I don't usually think "that does a.dot(b.T)." If we introduced the T2 syntax, this would be valid: a @ b.T2 It makes the intent much clearer. This helps readability even more when you're trying to put together something that follows a larger equation while still broadcasting correctly. Does this help make the use cases a bit clearer? Best, -Ian Henriksen
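A quick check (with arbitrarily chosen shapes) that the verbose spellings above really agree:

```python
import numpy as np

a = np.random.rand(2, 3, 4)
b = np.random.rand(2, 5, 4)

# Two spellings of a stacked a.dot(b.T):
r1 = np.einsum('...ij,...kj', a, b)
r2 = a @ np.swapaxes(b, -1, -2)

assert r1.shape == (2, 3, 5)
assert np.allclose(r1, r2)
```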

On Fri, Apr 8, 2016 at 4:37 PM, Ian Henriksen < insertinterestingnamehere@gmail.com> wrote:
would: a @ colvector(b) work too? or is T2 generalized to more than one column? (though I suppose colvector() could support more than one column also -- weird though that might be.) -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov

On Mon, Apr 11, 2016 at 5:24 PM Chris Barker <chris.barker@noaa.gov> wrote:
Right, so I've opted to withdraw my support for having the T2 syntax prepend dimensions when the array has fewer than two dimensions. Erroring out in the 1D case addresses my concerns well enough. The colvec/rowvec idea seems nice too, but it matters a bit less to me, so I'll leave that discussion open for others to follow up on. Having T2 be a broadcasting transpose is a bit more general than any semantics for rowvec/colvec that I can think of. Here are specific arrays for which the expression a @ b.T2 can only be handled using some sort of transpose: a = np.random.rand(2, 3, 4) b = np.random.rand(2, 1, 3, 4) Using these inputs, the expression a @ b.T2 would have the shape (2, 2, 3, 3). All the T2 property would be doing is a transpose that has similar broadcasting semantics to matmul, solve, inv, and the other gufuncs. The primary difference with those other functions is that transposes would be done as views whereas the other operations, because of the computations they perform, all have to return new output arrays. Hope this helps, -Ian Henriksen
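The shape arithmetic in this example can be verified today by spelling the broadcasting transpose with `swapaxes`:

```python
import numpy as np

a = np.random.rand(2, 3, 4)
b = np.random.rand(2, 1, 3, 4)

# A broadcasting transpose of b has shape (2, 1, 4, 3); matmul then
# broadcasts the stack dimensions (2,) and (2, 1) to (2, 2), and
# multiplies (3, 4) @ (4, 3) matrices to get (3, 3) blocks:
bt = np.swapaxes(b, -1, -2)
assert bt.shape == (2, 1, 4, 3)
assert (a @ bt).shape == (2, 2, 3, 3)
```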

On Thu, Apr 7, 2016 at 10:00 AM, Ian Henriksen <insertinterestingnamehere@gmail.com> wrote:
I feel like this is an argument for named axes, and broadcasting rules that respect those names, as in xarray? There's been some speculative discussion about adding something along these lines to numpy, though nothing that's even reached the half-baked stage. -n -- Nathaniel J. Smith -- https://vorpus.org

On 06/04/2016 04:11, Todd wrote:
Hello, My two cents here, I've seen hundreds of people (literally hundreds) stumbling on this .T trick with 1D vectors when they were trying to do some linear algebra with numpy, so at first I had the same feeling as you. But the real issue was that *all* these people were coming from matlab and expected numpy to behave the same way. Once the logic behind 1D vectors was explained it made sense to most of them and there were no more problems. And by the way I don't see any way to tell apart a 1D "row vector" from a 1D "column vector": think of code mixing a Rn=>R jacobian matrix and some data supposed to be used as measurements in a linear system, so we have J=np.array([1,2,3,4]) and B=np.array([5,6,7,8]) -- what would the output of J.T2 and B.T2 be? I think it's much better to get used to writing J=np.array([1,2,3,4]).reshape(1,4) and B=np.array([5,6,7,8]).reshape(4,1), then you can use .T and @ without any verbosity and at least it forces users (read "my students" here) to think twice before writing some linear algebra nonsense. Regards.
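The explicit-reshape style advocated here, worked through on the J/B example (inner product 1·5 + 2·6 + 3·7 + 4·8 = 70):

```python
import numpy as np

J = np.array([1, 2, 3, 4]).reshape(1, 4)   # explicit row (a 1x4 jacobian)
B = np.array([5, 6, 7, 8]).reshape(4, 1)   # explicit column of measurements

assert (J @ B).shape == (1, 1)             # inner product
assert (J @ B)[0, 0] == 70
assert (B @ J).shape == (4, 4)             # outer product
assert J.T.shape == (4, 1)                 # .T is now unambiguous
```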

On Thu, Apr 7, 2016 at 3:39 AM, Irvin Probst <irvin.probst@ensta-bretagne.fr> wrote:
The problem isn't necessarily understanding, although that is a problem. The bigger problem is having to jump through hoops to do basic matrix math.
As I said elsewhere, we already have a convention for this established by `np.atleast_2d`. 1D arrays are treated as row vectors. `np.hstack` and `np.vstack` also treat 1D arrays as row vectors. So `arr.T2` will follow this convention, being equivalent to `np.atleast_2d(arr).T`.
That works okay when you know beforehand what the shape of the array is (although it may very well be the difference between a simple, 1-line piece of code and a 3-line piece of code). But what if you try to turn this into a general-purpose function? Then any function that has linear algebra needs to call `atleast_2d` on every value used in that linear algebra, or use `if` tests. And if you forget, it may not be obvious until much later depending on what you initially use the function for and what you use it for later.

On Tue, Apr 5, 2016 at 7:11 PM, Todd <toddrjen@gmail.com> wrote:
An alternative that was mentioned in the bug tracker (https://github.com/numpy/numpy/issues/7495), possibly by me, would be to have arr.T2 act as a stacked-transpose operator, i.e. treat an arr with shape (..., n, m) as being a (...)-shaped stack of (n, m) matrices, and transpose each of those matrices, so the output shape is (..., m, n). And since this operation intrinsically acts on arrays with shape (..., n, m) then trying to apply it to a 0d or 1d array would be an error. -n -- Nathaniel J. Smith -- https://vorpus.org

Nathaniel Smith <njs <at> pobox.com> writes:
I think that the problem is not that it doesn't raise an error for 1D array, but that it doesn't do anything useful to 1D arrays. Raising an error would change nothing to the way transpose is used now. For a 1D array a of shape (N,), I expect a.T2 to be of shape (N, 1), which is useful when writing formulas, and clearer that a[None].T. Actually I'd like a.T to do that alreadu, but I guess backward compatibility is more important.

No, but it would make it clear that you can't expect transpose to make a 1D array into a2D array.
For a 1D array a of shape (N,), I expect a.T2 to be of shape (N, 1),
Why not (1,N)? -- it is not well defined, though I suppose it's not so bad to establish a convention that a 1-D array is a "row vector" rather than a "column vector". But the truth is that Numpy arrays are arrays, not matrices and vectors. The "right" way to do this is to properly extend and support the matrix object, adding row and column vector objects, and then it would be clear. But while there has been a lot of discussion about that in the past, the fact is that no one wants it bad enough to write the code. So I think it's better to keep Numpy arrays "pure", and if you want to change the rank of an array, you do so explicitly. I use: A_vector.shape = (-1,1) BTW, if transposing a (N,) array gives you a (N,1) array, what does transposing a (N,1) array give you? (1,N) or (N,) ? -CHB

On Wed, Apr 6, 2016 at 11:44 AM, Chris Barker - NOAA Federal < chris.barker@noaa.gov> wrote:
I think that cat is already out of the bag. As long as you can do matrix multiplication on arrays using the @ operator, I think they aren't really "pure" anymore.
My suggestion is that this explicitly increases the number of dimensions to at least 2. The result will always have at least 2 dimensions. So 0D -> 2D, 1D -> 2D, 2D -> 2D, 3D -> 3D, 4D -> 4D, etc. So this would be equivalent to the existing `atleast_2d` function.

On 4/6/2016 1:47 PM, Todd wrote:
I truly hope nothing is done like this. But underlying the proposal is apparently the idea that there be an attribute equivalent to `atleast_2d`. Then call it `d2p`. You can now have `a.d2p.T` which is a lot more explicit and general than say `a.T2`, while requiring only 3 more keystrokes. (It's still horribly ugly, though, and I hope this too is dismissed.) Alan Isaac

On Wed, Apr 6, 2016 at 10:47 AM, Todd <toddrjen@gmail.com> wrote:
not really -- you still need to use arrays that are the "correct" shape. Ideally, a row vector is (1, N) and a column vector is (N,1). Though I know there are places that a 1-D array is treated as a column vector.
my point is that for 2D arrays: arr.T.T == arr, but in this case, we would be making a one-way street: when you transpose a 1D array, you treat it as a row vector, and return a "column vector" -- a (N,1) array. But when you transpose a "column vector" to get a row vector, you get a (1,N) array, not a (N,) array. So I think we need to either have proper row and column vectors (to go with matrices) or require people to create the appropriate 2D arrays. Perhaps there should be an easier, more obvious way to spell "make this a column vector", but I don't think .T is it. Though arr.shape = (-1,1) has always worked fine for me. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov

On Tue, Apr 5, 2016 at 9:14 PM Nathaniel Smith <njs@pobox.com> wrote:
I agree that we could really use a shorter syntax for a broadcasting transpose. Swapaxes is far too verbose for something that should be so common now that we've introduced the new matmul operator. That said, the fact that 1-D vectors are conceptually so similar to row vectors makes transposing a 1-D array a potential pitfall for a lot of people. When broadcasting along the leading dimension, a (n) shaped array and a (1, n) shaped array are already treated as equivalent. Treating a 1-D array like a row vector for transposes seems like a reasonable way to make things more intuitive for users. Rather than raising an error for arrays with fewer than two dimensions, the new syntax could be made equivalent to np.swapaxes(np.atleast_2d(arr), -1, -2). From the standpoint of broadcasting semantics, using atleast_2d can be viewed as allowing broadcasting along the inner dimensions. Though that's not a common thing, at least there's a precedent. The only downside I can see with allowing T2 to call atleast_2d is that it would make things like A @ b and A @ b.T2 equivalent when b is one-dimensional. That's already the case with our current syntax though. There's some inherent design tension between the fact that broadcasting usually prepends ones to fill in missing dimensions and the fact that our current linear algebra semantics often treat rows as columns, but making 1-D arrays into rows makes a lot of sense as far as user experience goes. Great ideas everyone! Best, -Ian Henriksen
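The semantics proposed here can be sketched as a plain function (`t2` is a hypothetical name; no such attribute or function exists in NumPy):

```python
import numpy as np

def t2(arr):
    """Sketch of the proposed broadcasting transpose: promote to at
    least 2D, then swap the last two axes."""
    return np.swapaxes(np.atleast_2d(arr), -1, -2)

assert t2(np.arange(3)).shape == (3, 1)           # 1D row -> column
assert t2(np.ones((2, 3))).shape == (3, 2)        # plain 2D transpose
assert t2(np.ones((5, 2, 3))).shape == (5, 3, 2)  # stacked transpose
```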

On Tue, Apr 5, 2016 at 11:14 PM, Nathaniel Smith <njs@pobox.com> wrote:
My intention was to make linear algebra operations easier in numpy. With the @ operator available, it is now very easy to do basic linear algebra on arrays without needing the matrix class. But getting an array into a state where you can use the @ operator effectively is currently pretty verbose and confusing. I was trying to find a way to make the @ operator more useful.

On Wed, Apr 6, 2016 at 10:43 AM, Todd <toddrjen@gmail.com> wrote:
Can you elaborate on what you're doing that you find verbose and confusing, maybe paste an example? I've never had any trouble like this doing linear algebra with @ or dot (which have similar semantics for 1d arrays), which is probably just because I've had different use cases, but it's much easier to talk about these things with a concrete example in front of us to put everyone on the same page. -n -- Nathaniel J. Smith -- https://vorpus.org

On Wed, Apr 6, 2016 at 5:20 PM, Nathaniel Smith <njs@pobox.com> wrote:
Let's say you want to do a simple matrix multiplication example. You create two example arrays like so:

a = np.arange(20)
b = np.arange(10, 50, 10)

Now you want to do a.T @ b. First you need to turn a into a 2D array. I can think of 10 ways to do this off the top of my head, and there may be more:

1a) a[:, None]
1b) a[None]
1c) a[None, :]
2a) a.shape = (1, -1)
2b) a.shape = (-1, 1)
3a) a.reshape(1, -1)
3b) a.reshape(-1, 1)
4a) np.reshape(a, (1, -1))
4b) np.reshape(a, (-1, 1))
5) np.atleast_2d(a)

5 is pretty clear, and will work fine with any number of dimensions, but is also long to type out when trying to do a simple example. The different variants of 1, 2, 3, and 4, however, will only work with 1D arrays (making them less useful for functions), are not immediately obvious to me in what the result will be (I always need to try it to make sure the result is what I expect), and are easy to get mixed up in my opinion. They also require people keep a mental list of lots of ways to do what should be a very simple task. Basically, my argument here is the same as the argument from pep465 for the inclusion of the @ operator: https://www.python.org/dev/peps/pep-0465/#transparent-syntax-is-especially-c... "A large proportion of scientific code is written by people who are experts in their domain, but are not experts in programming. And there are many university courses run each year with titles like "Data analysis for social scientists" which assume no programming background, and teach some combination of mathematical techniques, introduction to programming, and the use of programming to implement these mathematical techniques, all within a 10-15 week period. These courses are more and more often being taught in Python rather than special-purpose languages like R or Matlab. 
For these kinds of users, whose programming knowledge is fragile, the existence of a transparent mapping between formulas and code often means the difference between succeeding and failing to write that code at all."
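For reference, a quick check that the column-vector variants in the list above all agree, while option 5 picks the row orientation:

```python
import numpy as np

a = np.arange(20)

# Variants 1a, 2b, 3b, 4b all produce the same column vector:
col = a[:, None]
assert col.shape == (20, 1)
assert np.array_equal(col, a.reshape(-1, 1))
assert np.array_equal(col, np.reshape(a, (-1, 1)))

# atleast_2d (variant 5) gives the row orientation instead:
assert np.atleast_2d(a).shape == (1, 20)
```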

On Thu, Apr 7, 2016 at 11:13 AM, Todd <toddrjen@gmail.com> wrote:
This doesn't work because of the ambiguity between column and row vector. In most cases 1d vectors in statistics/econometrics are column vectors. Sometime it takes me a long time to figure out whether an author uses row or column vector for transpose. i.e. I often need x.T dot y which works for 1d and 2d to produce inner product. but the outer product would require most of the time a column vector so it's defined as x dot x.T. I think keeping around explicitly 2d arrays if necessary is less error prone and confusing. But I wouldn't mind a shortcut for atleast_2d (although more often I need atleast_2dcol to translate formulas) Josef

On Thu, Apr 7, 2016 at 11:35 AM, <josef.pktd@gmail.com> wrote:
At least from what I have seen, in all cases in numpy where a 1D array is treated as a 2D array, it is always treated as a row vector, the examples I can think of being atleast_2d, hstack, vstack, and dstack. So using this convention would be in line with how it is used elsewhere in numpy.

On Thu, Apr 7, 2016 at 11:42 AM, Todd <toddrjen@gmail.com> wrote:
AFAIK, linear algebra works differently, 1-D is special
yy[:4].dot(xx) array([70, 76, 82, 88, 94])
np.__version__ '1.6.1'
I don't think numpy treats 1d arrays as row vectors. numpy has C-order for axis preference which coincides in many cases with row vector behavior.
It's not an uncommon exception for me. Josef

On Do, 2016-04-07 at 11:56 -0400, josef.pktd@gmail.com wrote:
<snip>
Well, broadcasting rules are that (n,) should typically behave similar to (1, n). However, for dot/matmul and @ the rules are stretched to mean "the one dimensional thing that gives an inner product" (using matmul since my python has no @ yet):

In [12]: a = np.arange(20)
In [13]: b = np.arange(20)
In [14]: np.matmul(a, b)
Out[14]: 2470
In [15]: np.matmul(a, b[:, None])
Out[15]: array([2470])
In [16]: np.matmul(a[None, :], b)
Out[16]: array([2470])
In [17]: np.matmul(a[None, :], b[:, None])
Out[17]: array([[2470]])

which indeed gives us a fun thing, because if you look at the last line, the outer product equivalent would be:

outer = np.matmul(a[None, :].T, b[:, None].T)

Now if I go back to the earlier example, a.T @ b does not achieve the outer product at all using T2, since:

a.T2 @ b.T2  # only correct for a, but not for b
a.T2 @ b     # b attempts to be "inner", so does not work

It almost seems to me that the example is a counterexample, because on first sight the `T2` attribute would still leave you with no shorthand for `b`. I understand the pain of having to write (and parse, to get into the depth of) things like `arr[:, np.newaxis]` or reshape. I also understand the idea of a shorthand for vectorized matrix operations. That is, an argument for a T2 attribute which errors on 1D arrays (not sure I like it, but that is a different issue). However, it seems that implicitly adding an axis which only works half the time does not help too much? I have to admit I don't write these things too much, but I wonder if it would not help more if we just provided some better information/link to longer examples in the "dimension mismatch" error message? In the end it is quite simple: as Nathaniel, I think I would like to see some example code where the code obviously looks easier than before. With the `@` operator that was the case; with the "dimension adding logic" I am not so sure, plus it seems it may add other pitfalls. - Sebastian

On Do, 2016-04-07 at 13:29 -0400, josef.pktd@gmail.com wrote:
Actually, better would be: a.T2 @ b.T2.T2 # Aha? And true enough, that works, but is it still reasonably easy to find and understand? Or is it just fiddling around, the same as you would try `a[:, None]` before finding `a[None, :]`, maybe worse? - Sebastian

On Thu, Apr 7, 2016 at 8:13 AM, Todd <toddrjen@gmail.com> wrote:
Basically, my argument here is the same as the argument from pep465 for the
column vector, but I don't think overloading transpose is the way to do that.
-CHB

On Wed, Apr 6, 2016 at 3:21 PM Nathaniel Smith <njs@pobox.com> wrote:
Here's another example that I've seen catch people now and again.

A = np.random.rand(100, 100)
b = np.random.rand(10)
A * b.T

In this case the user pretty clearly meant to be broadcasting along the rows of A rather than along the columns, but the code fails silently. When an issue like this gets mixed into a larger series of broadcasting operations, the error becomes difficult to find. This error isn't necessarily unique to beginners either. It's a common typo that catches intermediate users who know about broadcasting semantics but weren't keeping close enough track of the dimensionality of the different intermediate expressions in their code. Best, -Ian Henriksen
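As posted, the snippet doesn't even run (a length-10 b raises a broadcasting error against a (100, 100) A; the follow-up notes b was meant to be length 100). With the intended shape, the failure is indeed silent:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((100, 100))
b = rng.random(100)          # the intended shape; rand(10) would raise

# b.T is a no-op on a 1D array, so this multiplies each row of A by b
# elementwise -- scaling the columns -- with no error raised:
wrong = A * b.T
assert np.array_equal(wrong, A * b)      # the .T changed nothing

# Scaling along the other axis requires an explicit column vector:
right = A * b[:, None]
assert right.shape == (100, 100)
assert not np.array_equal(wrong, right)
```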

On Thu, Apr 7, 2016 at 10:00 AM, Ian Henriksen < insertinterestingnamehere@gmail.com> wrote:
typo? that was supposed to be b = np.random.rand(100). yes? This is exactly what someone else referred to as the expectations of someone that comes from MATLAB, and doesn't yet "get" that 1D arrays are 1D arrays. All of this is EXACTLY the motivation for the matrix class -- which never took off, and was never complete (it needed a row and column vector implementation, if you ask me). But I think the reason it didn't take off is that it really isn't that useful, but is different enough from regular arrays to be a greater source of confusion. And it was decided that all people REALLY wanted was an obvious way to get matrix multiply, which we now have with @. So this discussion brings up that we also need an easy and obvious way to make a column vector -- maybe:

np.col_vector(arr)

which would be a synonym for np.reshape(arr, (-1,1)). Would that make anyone happy? NOTE: having transposing a 1D array raise an exception would help remove a lot of the confusion, but it may be too late for that....

In this case the user pretty clearly meant to be broadcasting along the
rows of A rather than along the columns, but the code fails silently.

hence the exception idea.... maybe a warning? -CHB
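A sketch of the helper proposed here (`np.col_vector` is hypothetical; it does not exist in NumPy):

```python
import numpy as np

def col_vector(arr):
    """Hypothetical helper from the discussion: an explicit, readable
    way to get a column vector (a synonym for reshape(arr, (-1, 1)))."""
    return np.reshape(arr, (-1, 1))

x = np.arange(3)
assert col_vector(x).shape == (3, 1)
assert np.array_equal(col_vector(x), x[:, None])
```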

On Thu, Apr 7, 2016 at 2:17 PM, Chris Barker <chris.barker@noaa.gov> wrote:
AFAIR, there is a lot of code that works correctly with .T being a noop for 1D e.g. covariance matrix/inner product x.T dot y as mentioned before. write unit tests with non square 2d arrays and the exception / test error shows up fast. Josef

On Thu, 7 Apr 2016 14:31:17 -0400, josef.pktd@gmail.com wrote:
FWIW I would give a +1e42 to something like np.colvect and np.rowvect (or whatever variant of these names). This is human readable and does not break anything, it's just an explicit shortcut to reshape/atleast_2d/etc. Regards.

On Thu, Apr 7, 2016 at 3:26 PM, Ian Henriksen < insertinterestingnamehere@gmail.com> wrote:
The current behavior is perfectly well defined, and I don't want a lot of warnings showing up because .T works suddenly only for ndim != 1. I make lots of mistakes during programming. But shape mismatch are usually very fast to catch. If you want safe programming, then force everyone to use only 2-D like in matlab. It would have prevented me from making many mistakes.
np.array(1).T array(1)
another noop. Why doesn't it convert it to 2d? Josef

On Thu, Apr 7, 2016 at 1:53 PM <josef.pktd@gmail.com> wrote:
I think we've misunderstood each other. Sorry if I was unclear. As I've understood the discussion thus far, "raising an error" refers to raising an error when a 1D array is used with the syntax a.T2 (for swapping the last two dimensions?). As far as whether or not a.T should raise an error for 1D arrays, that ship has definitely already sailed. I'm making the case that there's value in having an abbreviated syntax that helps prevent errors from accidentally using a 1D array, not that we should change the existing semantics. Cheers, -Ian

On Thu, Apr 7, 2016 at 11:31 AM, <josef.pktd@gmail.com> wrote:
oh well, then no warning, either.
write unit tests with non square 2d arrays and the exception / test error shows up fast.
Guido wrote a note to python-ideas about the conflict between the use cases of "scripting" and "large system development" -- he urged both camps to respect and listen to each other. I think this is very much a "scripters" issue -- so no unit tests, etc.... For my part, I STILL have to kick myself once in a while for using square arrays in testing/exploration! -CHB

On Thu, Apr 7, 2016 at 12:18 PM Chris Barker <chris.barker@noaa.gov> wrote:
Hahaha, thanks, yes, in describing a common typo I demonstrated another one. At least this one doesn't fail silently.
Most of the cases I've seen this error have come from people unfamiliar with matlab who, like I said, weren't tracking dimensions quite as carefully as they should have. That said, it's just anecdotal evidence. I wouldn't be at all surprised if this were an issue for matlab users as well. As far as the matrix class goes, we really shouldn't be telling anyone to use that anymore.
Yep. An exception may be the best way forward here. My biggest objection is that the current semantics make it easy for people to silently get unintended behavior.
maybe a warning?
-CHB
-Ian Henriksen

On 7 April 2016 at 11:17, Chris Barker <chris.barker@noaa.gov> wrote:
I'm curious to see use cases where this doesn't solve the problem. The most common operations that I run into:

colvec = lambda x: np.c_[x]
x = np.array([1, 2, 3])
A = np.arange(9).reshape((3, 3))

1) x @ x (equivalent to x @ colvec(x))
2) A @ x (equivalent to A @ colvec(x), apart from the shape)
3) x @ A
4) x @ colvec(x) -- gives an error, but perhaps this should work and be equivalent to np.dot(colvec(x), rowvec(x)) ?

If (4) were changed, 1D arrays would mostly* be interpreted as row vectors, and there would be no need for a rowvec function. And we already do that kind of magic for (2). Stéfan * not for special case (1)

On Fri, Apr 8, 2016 at 9:59 AM, Charles R Harris <charlesr.harris@gmail.com> wrote:
I don't follow this. Wouldn't it only be an issue for 1D arrays, rather than the "last index"? Or maybe I'm totally missing the point. But anyway, are (N,1) and (1, N) arrays insufficient for representing column and row vectors for some reason? If not -- then we have a way to express a column or row vector, we just need an easier and more obvious way to create them. *maybe* we could have actual column and row vector classes -- they would BE regular arrays, with (1,N) or (N,1) dimensions, and act the same in every way except their __repr__, and we'd provide handy factory functions for them. These were needed to complete the old Matrix class -- which is no longer needed now that we have @ (i.e. a 2D array IS a matrix). Note: this is not very well thought out! -CHB

On Fri, Apr 8, 2016 at 5:11 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
if a is 1d or 2d, and vrow and vcol could also be 2d arrays (not a single row or col), this is just a part of a long linear algebra expression:

1d dot 1d is different from vrow dot vcol
A dot 1d is different from A dot vcol.

There are intentional differences in the linear algebra behavior of 1d versus a col or row vector. One of those is dropping the extra dimension. We are using this a lot to switch between 1-d and 2-d cases. And another great thing about numpy is that often code immediately generalizes from 1-d to 2d with just some tiny adjustments. (I haven't played with @ yet.) I worry that making 1-d arrays suddenly behave ambiguously as a weird 1-d/2-d mixture will make code more inconsistent and more difficult to follow. Shortcuts and variations of atleast_2d sound fine, but not implicitly. Josef

On Thu, Apr 7, 2016 at 4:04 PM Stéfan van der Walt <stefanv@berkeley.edu> wrote:
Thinking this over a bit more, I think a broadcasting transpose that errors out on arrays that are less than 2D would cover the use cases of which I'm aware. The biggest things to me are having a broadcasting 2D transpose and having some form of transpose that doesn't silently pass 1D arrays through unchanged. Adding properties like colvec and rowvec has less bearing on the use cases I'm aware of, but they both provide nice syntax sugar for common reshape operations. It seems like it would cover all the needed cases for simplifying expressions involving matrix multiplication. It's not totally clear what the semantics should be for higher dimensional arrays or 2D arrays that already have a shape incompatible with the one desired. Best, -Ian Henriksen

On 4/8/2016 4:28 PM, Ian Henriksen wrote:
The biggest things to me are having a broadcasting 2D transpose and having some form of transpose that doesn't silently pass 1D arrays through unchanged.
This comment, like much of this thread, seems to long for the matrix class but not want to actually use it. It seems pretty simple to me: if you want everything forced to 2d, always use the matrix class. If you want to use arrays, they work nicely now, and they work as expected once you understand what you are working with. (I.e., *not* matrices.) Btw, numpy.outer(a, b) produces an outer product. This may be off topic, but it seemed to me that some of the discussion overlooks this. I suggest that anyone who thinks numpy is falling short in this area point out how Mma has addressed this shortcoming. Wolfram will never be accused of a reluctance to add functions when there is a perceived need ... Cheers, Alan Isaac

On Fri, Apr 8, 2016 at 2:09 PM, Alan Isaac <alan.isaac@gmail.com> wrote:
Note the word "broadcasting" -- he doesn't want 2d matrices, he wants tools that make it easy to work with stacks of 2d matrices stored in 2-or-more-dimensional arrays. -n -- Nathaniel J. Smith -- https://vorpus.org

On Fri, Apr 8, 2016 at 4:04 PM Alan Isaac <alan.isaac@gmail.com> wrote:
Sorry if there's any misunderstanding here. Map doesn't really help much. That'd only be good for dealing with three dimensional cases and you'd get a list of arrays, not a view with the appropriate axes swapped.

np.einsum('...ji', a)
np.swapaxes(a, -1, -2)
np.rollaxis(a, -1, -2)

all do the right thing, but they are all fairly verbose for such a simple operation. Here's a simple example of when such a thing would be useful. With 2D arrays you can write

a.dot(b.T)

If you want to have that same operation follow the existing gufunc broadcasting semantics you end up having to write one of the following:

np.einsum('...ij,...kj', a, b)
a @ np.swapaxes(b, -1, -2)
a @ np.rollaxis(b, -1, -2)

None of those are very concise, and, when I look at them, I don't usually think "that does a.dot(b.T)." If we introduced the T2 syntax, this would be valid:

a @ b.T2

It makes the intent much clearer. This helps readability even more when you're trying to put together something that follows a larger equation while still broadcasting correctly. Does this help make the use cases a bit clearer? Best, -Ian Henriksen
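A runnable version of the stacked-transpose example, using swapaxes in place of the proposed `.T2`:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((5, 3, 4))    # a stack of five (3, 4) matrices
b = rng.random((5, 2, 4))    # a stack of five (2, 4) matrices

# The proposed a @ b.T2, spelled with today's tools:
out = a @ np.swapaxes(b, -1, -2)
assert out.shape == (5, 3, 2)

# It matches a[i].dot(b[i].T) matrix by matrix:
for i in range(5):
    assert np.allclose(out[i], a[i].dot(b[i].T))

# The einsum spelling from the post agrees:
assert np.allclose(out, np.einsum('...ij,...kj', a, b))
```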

On Fri, Apr 8, 2016 at 4:37 PM, Ian Henriksen < insertinterestingnamehere@gmail.com> wrote:
would: a @ colvector(b) work too? or is T2 generalized to more than one column? (though I suppose colvector() could support more than one column also -- weird though that might be.) -CHB

On Mon, Apr 11, 2016 at 5:24 PM Chris Barker <chris.barker@noaa.gov> wrote:
Right, so I've opted to withdraw my support for having the T2 syntax prepend dimensions when the array has fewer than two dimensions. Erroring out in the 1D case addresses my concerns well enough. The colvec/rowvec idea seems nice too, but it matters a bit less to me, so I'll leave that discussion open for others to follow up on. Having T2 be a broadcasting transpose is a bit more general than any semantics for rowvec/colvec that I can think of. Here are specific arrays that, in the expression a @ b.T2, can only be handled using some sort of transpose:

a = np.random.rand(2, 3, 4)
b = np.random.rand(2, 1, 3, 4)

Using these inputs, the expression a @ b.T2 would have the shape (2, 2, 3, 3). All the T2 property would be doing is a transpose that has similar broadcasting semantics to matmul, solve, inv, and the other gufuncs. The primary difference with those other functions is that transposes would be done as views whereas the other operations, because of the computations they perform, all have to return new output arrays. Hope this helps, -Ian Henriksen
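Checking the shapes in this example with today's tools (swapaxes standing in for the proposed `.T2`; matmul broadcasts the (2,) and (2, 1) batch dimensions to (2, 2)):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((2, 3, 4))
b = rng.random((2, 1, 3, 4))

# b.T2 written with swapaxes: b becomes (2, 1, 4, 3), and matmul
# broadcasts the batch dims (2,) and (2, 1) to (2, 2):
out = a @ np.swapaxes(b, -1, -2)
assert out.shape == (2, 2, 3, 3)
```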

On Thu, Apr 7, 2016 at 10:00 AM, Ian Henriksen <insertinterestingnamehere@gmail.com> wrote:
I feel like this is an argument for named axes, and broadcasting rules that respect those names, as in xarray? There's been some speculative discussion about adding something along these lines to numpy, though nothing that's even reached the half-baked stage. -n -- Nathaniel J. Smith -- https://vorpus.org

On 06/04/2016 04:11, Todd wrote:
Hello, My two cents here, I've seen hundreds of people (literally hundreds) stumbling on this .T trick with 1D vectors when they were trying to do some linear algebra with numpy so at first I had the same feeling as you. But the real issue was that *all* these people were coming from matlab and expected numpy to behave the same way. Once the logic behind 1D vectors was explained it made sense to most of them and there were no more problems. And by the way I don't see any way to tell apart a 1D "row vector" from a 1D "column vector", think of a code mixing a Rn=>R jacobian matrix and some data supposed to be used as measurements in a linear system, so we have J=np.array([1,2,3,4]) and B=np.array([5,6,7,8]), what would the output of J.T2 and B.T2 be ? I think it's much better to get used to writing J=np.array([1,2,3,4]).reshape(1,4) and B=np.array([5,6,7,8]).reshape(4,1), then you can use .T and @ without any verbosity and at least it forces users (read "my students" here) to think twice before writing some linear algebra nonsense. Regards.
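Irvin's teaching convention, as a runnable sketch:

```python
import numpy as np

J = np.array([1, 2, 3, 4]).reshape(1, 4)   # explicitly a row (Jacobian)
B = np.array([5, 6, 7, 8]).reshape(4, 1)   # explicitly a column (data)

# With the shapes explicit, .T and @ read like the mathematics:
assert (J @ B).shape == (1, 1)   # inner product, kept 2D
assert (B @ J).shape == (4, 4)   # outer product
assert (J @ B)[0, 0] == 70       # 1*5 + 2*6 + 3*7 + 4*8
```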

On Thu, Apr 7, 2016 at 3:39 AM, Irvin Probst <irvin.probst@ensta-bretagne.fr> wrote:
The problem isn't necessarily understanding, although that is a problem. The bigger problem is having to jump through hoops to do basic matrix math.
As I said elsewhere, we already have a convention for this established by `np.atleast_2d`. 1D arrays are treated as row vectors. `np.hstack` and `np.vstack` also treat 1D arrays as row vectors. So `arr.T2` will follow this convention, being equivalent to `np.atleast_2d(arr).T`.
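A quick check of the row-vector convention cited here:

```python
import numpy as np

a = np.arange(3)

assert np.atleast_2d(a).shape == (1, 3)    # promoted to a row
assert np.vstack([a, a]).shape == (2, 3)   # 1D inputs stacked as rows

# So the proposed arr.T2, defined as np.atleast_2d(arr).T, is a column:
assert np.atleast_2d(a).T.shape == (3, 1)
```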
That works okay when you know beforehand what the shape of the array is (although it may very well be the difference between a simple, 1-line piece of code and a 3-line piece of code). But what if you try to turn this into a general-purpose function? Then any function that has linear algebra needs to call `atleast_2d` on every value used in that linear algebra, or use `if` tests. And if you forget, it may not be obvious until much later depending on what you initially use the function for and what you use it for later.
participants (14)
- Alan Isaac
- Charles R Harris
- Chris Barker
- Chris Barker - NOAA Federal
- Ian Henriksen
- Irvin Probst
- josef.pktd@gmail.com
- Joseph Martinot-Lagarde
- Juan Nunez-Iglesias
- Matthew Brett
- Nathaniel Smith
- Sebastian Berg
- Stéfan van der Walt
- Todd