Hi All,

Based on discussion with Marten on github <https://github.com/numpy/numpy/issues/13797>, I have a couple of suggestions for syntax improvements on array transpose operations.

First, introducing a shorthand for the Hermitian transpose operator. I thought "A.HT" might be a viable candidate.

Second, adding an array method that operates like a normal transpose. To my understanding, "A.transpose()" currently inverts the usual order of all dimensions. This may be useful in some applications involving tensors, but is not what I would usually assume a transpose on a multi-dimensional array would entail. I suggest a syntax of "A.MT" to indicate a transpose of the last two dimensions by default, maybe with optional arguments (i, j) to indicate which two dimensions to transpose.

I'm new to this mailing list format, hopefully I'm doing this right :)

Thanks, Stew
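For concreteness, a sketch of the two behaviours in current NumPy (the proposed `A.MT` attribute does not exist; `np.swapaxes` is used here to express what it would do):

```python
import numpy as np

A = np.arange(24).reshape(2, 3, 4)

# Current behaviour: A.transpose() (and A.T) reverses *all* axes.
print(A.transpose().shape)           # (4, 3, 2)

# The proposed "matrix transpose" would swap only the last two axes,
# which today has to be spelled np.swapaxes(A, -2, -1):
print(np.swapaxes(A, -2, -1).shape)  # (2, 4, 3)
```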
This might be contentious, but I wonder if, with a long enough deprecation cycle, we can change the meaning of .T. That would look like:

* Emit a FutureWarning on `more_than_2d.T` with a message like "in future .T will transpose just the last two dimensions, not all dimensions. Use arr.transpose() if transposing all {n} dimensions is deliberate"
* Wait 5 releases or so, see how many matches Google / GitHub has for this warning.
* If the impact is minimal, change .T
* If the impact is large, change to a deprecation warning

An argument for this approach: a good amount of code I've seen in the wild already assumes .T is a 2-d transpose, and as a result does not work correctly when called with stacks of arrays. Changing .T might fix this broken code automatically.

If the change would be too intrusive, then keeping the deprecation warning at least prevents new users deliberately using .T for >2-d transposes, which is possibly valuable for readers.

Eric
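The kind of breakage Eric describes is easy to reproduce; code written for a single matrix silently gets the wrong shapes on a stack of matrices (a minimal made-up example):

```python
import numpy as np

M = np.arange(6.).reshape(2, 3)
print((M @ M.T).shape)              # (2, 2): fine for a single matrix

stack = np.arange(24.).reshape(4, 2, 3)  # a stack of four 2x3 matrices
# .T reverses all three axes, so the "same" code no longer means
# "transpose each matrix in the stack":
print(stack.T.shape)                # (3, 2, 4), not (4, 3, 2)
# The per-matrix transpose has to be spelled explicitly:
print((stack @ np.swapaxes(stack, -2, -1)).shape)  # (4, 2, 2)
```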
+1 for this. I have often seen (and sometimes written) code that does this automatically, and it is a common mistake. However, we will need some way to filter for intent, as the people who write this code are the ones who didn’t read the docs on it at the time, and so there might be a fair amount of noise even if it fixes their code.

I also agree that a transpose of an array with ndim > 2 doesn’t make sense without specifying the order, at least for the applications I have seen so far.

Get Outlook for iOS<https://aka.ms/o0ukef>
On Sun, 2019-06-23 at 19:51 +0000, Hameer Abbasi wrote:
+1 for this. I have often seen (and sometimes written) code that does this automatically, and it is a common mistake.
Yeah, likely worth a shot. I doubt there are many uses for the n-dimensional axis transpose, so maybe a FutureWarning approach can work. If not, I suppose the solution is the deprecation for ndim != 2.

Another point about `.T` is the 1-dimensional case, which commonly causes confusion. If we do something here, we should think about that as well.

- Sebastian
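The 1-D case Sebastian mentions is a silent no-op today, which is a common source of the confusion:

```python
import numpy as np

v = np.arange(3)
# .T does nothing for a 1-D array; there is no second axis to swap.
print(v.T.shape)               # (3,)
# An explicit new axis is needed to actually get a column vector:
print(v[:, np.newaxis].shape)  # (3, 1)
```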
_______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@python.org https://mail.python.org/mailman/listinfo/numpy-discussion
On Sun, Jun 23, 2019 at 10:37 PM Sebastian Berg <sebastian@sipsolutions.net> wrote:
Yeah, likely worth a shot. I doubt there are many uses for the n-dimensional axis transpose, so maybe a FutureWarning approach can work. If not, I suppose the solution is the deprecation for ndim != 2.
Any chance that the n-dimensional transpose is being used in code interfacing Fortran/MATLAB and Python? One thing the current multidimensional transpose is good for is to switch between row-major and column-major order. I don't know, however, whether this switch actually has to be done often in code, in practice.

András
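The order-switching use András describes works because the full transpose is a view with reversed strides, so a C-contiguous array comes out F-contiguous without a copy:

```python
import numpy as np

a = np.zeros((2, 3, 4))            # C-ordered by default
print(a.flags['C_CONTIGUOUS'])     # True
at = a.transpose()                 # reverses all axes; no data is copied
print(at.shape)                    # (4, 3, 2)
print(at.flags['F_CONTIGUOUS'])    # True
print(at.base is a)                # True: a view, not a copy
```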
On Sun, 2019-06-23 at 23:03 +0200, Andras Deak wrote:
Any chance that the n-dimensional transpose is being used in code interfacing fortran/matlab and python? One thing the current multidimensional transpose is good for is to switch between row-major and column-major order. I don't know, however, whether this switch actually has to be done often in code, in practice.
I suppose there is a chance for that, to fix the order for returned arrays (for input arrays you probably need to fix the memory order, so that `copy(..., order="F")` or `np.ensure` is more likely what you want). Those users should be fine to switch over to `arr.transpose()`. The question is mostly if it hits so much code that it is painful.

- Sebastian
Hi All,

I'd love to have `.T` mean the right thing, and am happy that people are suggesting it after I told Stewart this was likely off-limits (which, in fairness, did seem to be the conclusion when we visited this before...). But is there something we can do to make it possible to use it already but ensure that code on previous numpy versions breaks? (Or works, but that seems impossible...)

For instance, in python2, one had `from __future__ import division` (etc.); could we have, e.g., a `from numpy.__future__ import matrix_transpose`, which, when imported, implied that `.T` just did the right thing without any warning? (Obviously, since that __future__.matrix_transpose wouldn't exist on older versions of numpy, it would correctly break the code when used with those.)

Also, a bit more towards the original request in the PR of a hermitian transpose: if we're trying to go for `.T` eventually having the obvious meaning, should we directly move towards also having `.H` as a shortcut for `.T.conj()`? We could even expose that only with the above future import - otherwise, the risk of abuse of `.T` would only grow...

Finally, on the meaning of `.T` for 1-D arrays, the sensible choices would seem to be (1) error; or (2) change shape to `(n, 1)`. Since while writing this sentence I changed my preference twice, I guess I should go for erroring (I think we need a separate solution for easily making stacks of row/column vectors).

All the best,

Marten
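A minimal sketch of what the proposed `.H` would compute (the attribute itself does not exist on ndarray; `.conj().T` is the current spelling):

```python
import numpy as np

A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])

H = A.conj().T   # what a hypothetical A.H shorthand would return

# The defining property <y, A x> == <H y, x> of the Hermitian
# transpose holds (np.vdot conjugates its first argument):
x = np.array([1 + 1j, 2 - 1j])
y = np.array([0 + 2j, 1 + 0j])
print(np.allclose(np.vdot(y, A @ x), np.vdot(H @ y, x)))  # True
```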
On Sun, 2019-06-23 at 17:12 -0400, Marten van Kerkwijk wrote:
Hi All,
I'd love to have `.T` mean the right thing, and am happy that people are suggesting it after I told Steward this was likely off-limits (which, in fairness, did seem to be the conclusion when we visited this before...). But is there something we can do to make it possible to use it already but ensure that code on previous numpy versions breaks? (Or works, but that seems impossible...)
For instance, in python2, one had `from __future__ import division (etc.); could we have, e.g., a `from numpy.__future__ import matrix_transpose`, which, when imported, implied that `.T` just did the right thing without any warning? (Obviously, since that __future__.matrix_transpose wouldn't exist on older versions of numpy, it would correctly break the code when used with those.)
If I remember correctly, this is actually possible but hacky. So it would probably be nicer to not go there. But yes, you are right, that would mean that we practically limit `.T` to 2-D arrays for at least 2 years.
Also, a bit more towards the original request in the PR of a hermitian transpose, if we're trying to go for `.T` eventually having the obvious meaning, should we directly move towards also having `.H` as a short-cut for `.T.conj()`? We could even expose that only with the above future import - otherwise, the risk of abuse of `.T` would only grow...
This opens the general question of how many and which attributes we actually want on ndarray. My first gut reaction is that I am -0 on it, but OTOH, for some math it is very nice and not a huge amount of clutter...
Finally, on the meaning of `.T` for 1-D arrays, the sensible choices would seem to (1) error; or (2) change shape to `(n, 1)`. Since while writing this sentence I changed my preference twice, I guess I should go for erroring (I think we need a separate solution for easily making stacks of row/column vectors).
Probably an error is good, which is nice, because we can just tag on a warning and not worry about it for a while ;).
If I remember correctly, [numpy.__future__ imports are] actually possible but hacky. So it would probably be nicer to not go there.

There was some discussion of this at https://stackoverflow.com/q/29905278/102441. I agree with the conclusion we should not go there - in particular, note that every builtin __future__ feature has been an interpreter-level change, not an object-level change. `from __future__ import division` changes the meaning of `/`, not of `int.__div__`. Framing the numpy change this way would mean rewriting `Attribute(obj, attr, Load)` ast nodes to `Call(np._attr_override, obj, attr)`, which is obviously not interoperable with any other module wanting to do the same thing. This opens other unpleasant cans of worms about “builtin” modules that perform attribute access: should `getattr(arr, 'T')` change behavior based on the module that calls it? Should `operator.attrgetter('T')` change behavior?

So I do not think we want to go down that road.
I had not looked at any implementation (only remembered the nice idea of "importing from the future"), and looking at the links Eric shared, it seems that the only way this would work is, effectively, pre-compilation doing a `<codetext>.replace('.T', '._T_from_the_future')`, where you'd be hoping that there never is any other meaning for a `.T` attribute, for any class, since it is impossible to be sure a given variable is an ndarray. (Actually, a lot less implausible than for the case of numpy indexing discussed in the link...)

Anyway, what I had in mind was something along the lines of, inside the `.T` code, there being a check on whether a particular future item was present in the environment. But thinking more, I can see that it is not trivial to get to know something about the environment in which the code that called you was written...

So, it seems there is no (simple) way to tell numpy that inside a given module you want `.T` to have the new behaviour, but still to warn if outside the module it is used in the old way (when risky)?

-- Marten

p.s. I'm somewhat loath to add new properties to ndarray, but `.T` and `.H` have such obvious and clear meaning to anyone dealing with (complex) matrices that I think it is worth it. See https://mail.python.org/pipermail/numpy-discussion/2019-June/079584.html for a list of options of attributes that we might deprecate "in exchange"...

All the best,

Marten
Please don't introduce more errors for 1D arrays. They are already very counter-intuitive for transposition and for other details not relevant to this issue. Emitting errors for such a basic operation is very bad for user experience. This already is the case with the wildly changing slicing syntax. It would have made sense if 2D arrays were the default objects and 1D required extra effort to create. But it is the other way around. Hence a transpose operation is "expected" from it.

This would kind of force all NumPy users to shift their code one tab further to accommodate the extra try/except blocks for "Oh wait, what if a 1D array comes in?" checks for the existence of transposability every time I write down `.T` in the code.

Code example: I am continuously writing code involving lots of matrix products with inverses and transposes/hermitians (say, the 2nd eq., https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve_cont... ). That means I have to check at least 4-6 matrices to see if any of them are transposable to make that equation go through.

The dot-H solution is actually my ideal choice, but I get the point that the base namespace is already crowded. I am even OK with `x.conj(T=True)` having a keyword for extra transposition so that I can get away with `x.conj(1)`; it doesn't solve the fundamental issue but at least gives some convenience.

Best, ilhan
I had not looked at any implementation (only remembered the nice idea of "importing from the future"), and looking at the links Eric shared, it seems that the only way this would work is, effectively, pre-compilation doing a `<code text>.replace('.T', '._T_from_the_future')`, where you'd be hoping that there never is any other meaning for a `.T` attribute, for any class, since it is impossible to be sure a given variable is an ndarray. (Actually, a lot less implausible than for the case of numpy indexing discussed in the link...)
Anyway, what I had in mind was something along the lines of inside the `.T` code there being a check on whether a particular future item was present in the environment. But thinking more, I can see that it is not trivial to get to know something about the environment in which the code that called you was written....
So, it seems there is no (simple) way to tell numpy that inside a given module you want `.T` to have the new behaviour, but still to warn if outside the module it is used in the old way (when risky)?
-- Marten
p.s. I'm somewhat loath to add new properties to ndarray, but `.T` and `.H` have such obvious and clear meaning to anyone dealing with (complex) matrices that I think it is worth it. See https://mail.python.org/pipermail/numpy-discussion/2019-June/079584.html for a list of options of attributes that we might deprecate "in exchange"...
All the best,
Marten
_______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@python.org https://mail.python.org/mailman/listinfo/numpy-discussion
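The 1D behaviour under discussion can be checked directly; a minimal sketch using today's NumPy:

```python
import numpy as np

v = np.arange(3)          # shape (3,)
assert v.T.shape == (3,)  # .T is a silent no-op on a 1D array

A = np.arange(6).reshape(2, 3)
assert A.T.shape == (3, 2)  # on a 2D array it transposes as expected

# dot/matmul already treat 1D operands sensibly, which is the basis
# of the argument against raising an error here:
assert v @ v == 5  # inner product: 0*0 + 1*1 + 2*2
```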
![](https://secure.gravatar.com/avatar/1198e2d145718c841565712312e04227.jpg?s=120&d=mm&r=g)
Given that np.dot and np.matmul do the right thing for 1-D arrays, I would be opposed to introducing an error as well. From: NumPy-Discussion <numpy-discussion-bounces+einstein.edison=gmail.com@python.org> on behalf of Ilhan Polat <ilhanpolat@gmail.com> Reply-To: Discussion of Numerical Python <numpy-discussion@python.org> Date: Monday, 24. June 2019 at 11:58 To: Discussion of Numerical Python <numpy-discussion@python.org> Subject: Re: [Numpy-discussion] Syntax Improvement for Array Transpose Please don't introduce more errors for 1D arrays. They are already very counter-intuitive for transposition and for other details not relevant to this issue. Emitting errors for such a basic operation is very bad for user experience. This already is the case with wildly changing slicing syntax. It would have made sense if 2D arrays were the default objects and 1D required extra effort to create. But it is the other way around. Hence a transpose operation is "expected" from it. This would kind of force all NumPy users to shift their code one tab further to accommodate the extra try/except blocks for "Oh wait, what if a 1D array comes in?" checks for the existence of transposability every time I write down `.T` in the code. Code example; I am continuously writing code involving lots of matrix products with inverses and transposes/hermitians (say, the 2nd eq., https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve_cont... ) That means I have to check at least 4-6 matrices if any of them are transposable to make that equation go through. The dot-H solution is actually my ideal choice but I get the point that the base namespace is already crowded. I am even OK with having `x.conj(T=True)` having a keyword for extra transposition so that I can get away with `x.conj(1)`; it doesn't solve the fundamental issue but at least gives some convenience.
Best, ilhan On Mon, Jun 24, 2019 at 3:11 AM Marten van Kerkwijk <m.h.vankerkwijk@gmail.com> wrote: I had not looked at any implementation (only remembered the nice idea of "importing from the future"), and looking at the links Eric shared, it seems that the only way this would work is, effectively, pre-compilation doing a `<code text>.replace('.T', '._T_from_the_future')`, where you'd be hoping that there never is any other meaning for a `.T` attribute, for any class, since it is impossible to be sure a given variable is an ndarray. (Actually, a lot less implausible than for the case of numpy indexing discussed in the link...) Anyway, what I had in mind was something along the lines of inside the `.T` code there being a check on whether a particular future item was present in the environment. But thinking more, I can see that it is not trivial to get to know something about the environment in which the code that called you was written.... So, it seems there is no (simple) way to tell numpy that inside a given module you want `.T` to have the new behaviour, but still to warn if outside the module it is used in the old way (when risky)? -- Marten p.s. I'm somewhat loath to add new properties to ndarray, but `.T` and `.H` have such obvious and clear meaning to anyone dealing with (complex) matrices that I think it is worth it. See https://mail.python.org/pipermail/numpy-discussion/2019-June/079584.html for a list of options of attributes that we might deprecate "in exchange"... All the best, Marten _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@python.org https://mail.python.org/mailman/listinfo/numpy-discussion
![](https://secure.gravatar.com/avatar/851ff10fbb1363b7d6111ac60194cc1c.jpg?s=120&d=mm&r=g)
Dear Hameer, Ilhan, Just to be sure: for a 1-d array, you'd both consider `.T` giving a shape of `(n, 1)` the right behaviour? I.e., it should still change from what it is now - which is to leave the shape at `(n,)`. Your argument about `dot` and `matmul` having similar behaviour certainly adds weight (but then, as I wrote before, my opinion on this changes by the second, so I'm very happy to defer to others who have a clearer sense of what is the right thing to do here!). I think my main worry now is how we could start using the new behaviour without having to wait 4..6 releases... All the best, Marten
![](https://secure.gravatar.com/avatar/1198e2d145718c841565712312e04227.jpg?s=120&d=mm&r=g)
Hello Marten, I was suggesting not changing the shape at all, since dot/matmul/solve do the right thing already in such a case. In my proposal, only for ndim >=2 do we switch the last two dimensions. Ilhan is right that adding a special case for ndim=1 (error) adds programmer overhead, which is against the general philosophy of NumPy I feel. Get Outlook for iOS<https://aka.ms/o0ukef> ________________________________ From: NumPy-Discussion <numpy-discussion-bounces+einstein.edison=gmail.com@python.org> on behalf of Marten van Kerkwijk <m.h.vankerkwijk@gmail.com> Sent: Monday, June 24, 2019 3:24 PM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] Syntax Improvement for Array Transpose Dear Hameer, Ilhan, Just to be sure: for a 1-d array, you'd both consider `.T` giving a shape of `(n, 1)` the right behaviour? I.e., it should still change from what it is now - which is to leave the shape at `(n,)`. Your argument about `dot` and `matmul` having similar behaviour certainly adds weight (but then, as I wrote before, my opinion on this changes by the second, so I'm very happy to defer to others who have a clearer sense of what is the right thing to do here!). I think my main worry now is how to get to be able to use a new state without having to wait 4..6 releases... All the best, Marten
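The proposal above (swap the last two axes for ndim >= 2, no-op for 1D) can be sketched as a plain function; `mt` is a hypothetical name for illustration only, not an existing or planned NumPy API:

```python
import numpy as np

def mt(a):
    """Sketch of the proposed behaviour: transpose only the last two
    axes when ndim >= 2, and leave 1D arrays unchanged."""
    a = np.asarray(a)
    if a.ndim < 2:
        return a
    return np.swapaxes(a, -1, -2)

stack = np.zeros((4, 2, 3))          # a "stack" of four 2x3 matrices
assert mt(stack).shape == (4, 3, 2)  # only the last two axes swap
assert stack.T.shape == (3, 2, 4)    # current .T reverses all axes
assert mt(np.zeros(5)).shape == (5,)
```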
![](https://secure.gravatar.com/avatar/998f5c5403f3657437a3afbf6a16e24b.jpg?s=120&d=mm&r=g)
I think we need to do something about the 1D case. I know that from a strict mathematical standpoint it doesn't do anything, and philosophically we should avoid special cases, but the current solution leads to enough confusion, and to silently doing an unexpected thing, that I think we need a better approach. Personally I think it is a nonsensical operation and so should result in an exception, but at the very least I think it needs to raise a warning. On Mon, Jun 24, 2019, 09:54 Hameer Abbasi <einstein.edison@gmail.com> wrote:
Hello Marten,
I was suggesting not changing the shape at all, since dot/matmul/solve do the right thing already in such a case.
In my proposal, only for ndim >=2 do we switch the last two dimensions.
Ilhan is right that adding a special case for ndim=1 (error) adds programmer overhead, which is against the general philosophy of NumPy I feel.
Get Outlook for iOS <https://aka.ms/o0ukef>
------------------------------ *From:* NumPy-Discussion <numpy-discussion-bounces+einstein.edison= gmail.com@python.org> on behalf of Marten van Kerkwijk < m.h.vankerkwijk@gmail.com> *Sent:* Monday, June 24, 2019 3:24 PM *To:* Discussion of Numerical Python *Subject:* Re: [Numpy-discussion] Syntax Improvement for Array Transpose
Dear Hameer, Ilhan,
Just to be sure: for a 1-d array, you'd both consider `.T` giving a shape of `(n, 1)` the right behaviour? I.e., it should still change from what it is now - which is to leave the shape at `(n,)`.
Your argument about `dot` and `matmul` having similar behaviour certainly adds weight (but then, as I wrote before, my opinion on this changes by the second, so I'm very happy to defer to others who have a clearer sense of what is the right thing to do here!).
I think my main worry now is how to get to be able to use a new state without having to wait 4..6 releases...
All the best,
Marten
![](https://secure.gravatar.com/avatar/a5c6e0b8f64a8a1940f5b2d367c1db6e.jpg?s=120&d=mm&r=g)
Points of reference: Mathematica: https://reference.wolfram.com/language/ref/Transpose.html Matlab: https://www.mathworks.com/help/matlab/ref/permute.html Personally I would find any divergence between a.T and a.transpose() to be rather surprising. Cheers, Alan Isaac
![](https://secure.gravatar.com/avatar/998f5c5403f3657437a3afbf6a16e24b.jpg?s=120&d=mm&r=g)
I think the corresponding MATLAB function/operation is this: https://www.mathworks.com/help/matlab/ref/transpose.html On Mon, Jun 24, 2019, 10:33 Alan Isaac <alan.isaac@gmail.com> wrote:
Points of reference: Mathematica: https://reference.wolfram.com/language/ref/Transpose.html Matlab: https://www.mathworks.com/help/matlab/ref/permute.html
Personally I would find any divergence between a.T and a.transpose() to be rather surprising.
Cheers, Alan Isaac
![](https://secure.gravatar.com/avatar/a5c6e0b8f64a8a1940f5b2d367c1db6e.jpg?s=120&d=mm&r=g)
IIRC, that works only on (2-d) matrices. Cheers, Alan Isaac On 6/24/2019 10:45 AM, Todd wrote:
I think the corresponding MATLAB function/operation is this:
![](https://secure.gravatar.com/avatar/f246a75a3ed39b6b92079ae0fa9e5852.jpg?s=120&d=mm&r=g)
On Mon, 24 Jun 2019, at 11:25 PM, Marten van Kerkwijk wrote:
Just to be sure: for a 1-d array, you'd both consider `.T` giving a shape of `(n, 1)` the right behaviour? I.e., it should still change from what it is now - which is to leave the shape at `(n,)`.
Just to chime in as a user: v.T should continue to be a silent no-op for 1D arrays. NumPy makes it arbitrary whether a 1D array is viewed as a row or column vector, but we often want to write .T to match the notation in a paper we're implementing. More deeply, I think .T should never change the number of dimensions of an array. I'm ambivalent about the whole discussion in this thread, but generally I think NumPy should err on the side of caution when deprecating behaviour. It's unclear to me whether the benefits of making .T transpose only the last two dimensions outweigh the costs of deprecation. Despite some people's assertion that those using .T to transpose >2D arrays are probably making a mistake, we have two perfectly correct uses in scikit-image. These could be easily changed to .transpose() (honestly they probably should!), but they illustrate that there is some amount of correct code out there that would be forced to keep up with an update here.
![](https://secure.gravatar.com/avatar/851ff10fbb1363b7d6111ac60194cc1c.jpg?s=120&d=mm&r=g)
Hi Juan, On Tue, Jun 25, 2019 at 9:35 AM Juan Nunez-Iglesias <jni.soma@gmail.com> wrote:
On Mon, 24 Jun 2019, at 11:25 PM, Marten van Kerkwijk wrote:
Just to be sure: for a 1-d array, you'd both consider `.T` giving a shape of `(n, 1)` the right behaviour? I.e., it should still change from what it is now - which is to leave the shape at `(n,)`.
Just to chime in as a user: v.T should continue to be a silent no-op for 1D arrays. NumPy makes it arbitrary whether a 1D array is viewed as a row or column vector, but we often want to write .T to match the notation in a paper we're implementing.
More deeply, I think .T should never change the number of dimensions of an array.
OK, that makes three of you, all agreeing on the same core argument, but with you now adding another strong one, of not changing the number of dimensions. Let's consider this aspect settled.
I'm ambivalent about the whole discussion in this thread, but generally I think NumPy should err on the side of caution when deprecating behaviour. It's unclear to me whether the benefits of making .T transpose only the last two dimensions outweigh the costs of deprecation. Despite some people's assertion that those using .T to transpose >2D arrays are probably making a mistake, we have two perfectly correct uses in scikit-image. These could be easily changed to .transpose() (honestly they probably should!), but they illustrate that there is some amount of correct code out there that would be forced to keep up with an update here.
Fair enough, there are people who actually read the manual and use things correctly! Though, being generally one of those, I still was very disappointed to find `.T` didn't do the last two axes. Any preference for alternative spellings? All the best, Marten
![](https://secure.gravatar.com/avatar/998f5c5403f3657437a3afbf6a16e24b.jpg?s=120&d=mm&r=g)
On Tue, Jun 25, 2019 at 9:35 AM Juan Nunez-Iglesias <jni.soma@gmail.com> wrote:
On Mon, 24 Jun 2019, at 11:25 PM, Marten van Kerkwijk wrote:
Just to be sure: for a 1-d array, you'd both consider `.T` giving a shape of `(n, 1)` the right behaviour? I.e., it should still change from what it is now - which is to leave the shape at `(n,)`.
Just to chime in as a user: v.T should continue to be a silent no-op for 1D arrays. NumPy makes it arbitrary whether a 1D array is viewed as a row or column vector, but we often want to write .T to match the notation in a paper we're implementing.
Why should it be silent? This is a source of bugs. At least in my experience, generally when people write v.T it is a mistake. Either they are coming from another language that works differently, or they failed to properly check their function arguments. And if you are doing it on purpose, you are doing something you know is a no-op for essentially documentation purposes, and I would think that is the sort of thing you need to make as explicit as possible. "Errors should never pass silently. Unless explicitly silenced." So as I said, I think at the very least this should be a warning. People who are doing this on purpose can easily silence (or just ignore) the warning, but it will help people who do it by mistake.
![](https://secure.gravatar.com/avatar/93a76a800ef6c5919baa8ba91120ee98.jpg?s=120&d=mm&r=g)
On Tue, Jun 25, 2019 at 7:20 AM Todd <toddrjen@gmail.com> wrote:
On Tue, Jun 25, 2019 at 9:35 AM Juan Nunez-Iglesias <jni.soma@gmail.com> wrote:
On Mon, 24 Jun 2019, at 11:25 PM, Marten van Kerkwijk wrote:
Just to be sure: for a 1-d array, you'd both consider `.T` giving a shape of `(n, 1)` the right behaviour? I.e., it should still change from what it is now - which is to leave the shape at `(n,)`.
Just to chime in as a user: v.T should continue to be a silent no-op for 1D arrays. NumPy makes it arbitrary whether a 1D array is viewed as a row or column vector, but we often want to write .T to match the notation in a paper we're implementing.
Why should it be silent? This is a source of bugs. At least in my experience, generally when people write v.T it is a mistake. Either they are coming from another language that works differently, or they failed to properly check their function arguments. And if you are doing it on purpose, you are doing something you know is a no-op for essentially documentation purposes, and I would think that is the sort of thing you need to make as explicit as possible. "Errors should never pass silently. Unless explicitly silenced."
Writing v.T is also sensible if you're writing code that could apply equally well to either a single vector or a stack of vectors. This mirrors the behavior of @, which also allows either single vectors or stacks of vectors (matrices) with the same syntax.
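A small example of the pattern described above, where one expression serves both a single vector and a 2D array (a sketch, not anyone's production code):

```python
import numpy as np

def gram(x):
    # The same expression covers both cases: for a 1D vector, .T is a
    # no-op and @ is the inner product; for a 2D matrix it is the
    # ordinary Gram matrix X^T X.
    return x.T @ x

v = np.array([1.0, 2.0, 2.0])
assert gram(v) == 9.0          # inner product, a scalar

X = np.eye(3)
assert gram(X).shape == (3, 3)
```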
![](https://secure.gravatar.com/avatar/998f5c5403f3657437a3afbf6a16e24b.jpg?s=120&d=mm&r=g)
On Tue, Jun 25, 2019 at 10:42 AM Stephan Hoyer <shoyer@gmail.com> wrote:
On Tue, Jun 25, 2019 at 7:20 AM Todd <toddrjen@gmail.com> wrote:
On Tue, Jun 25, 2019 at 9:35 AM Juan Nunez-Iglesias <jni.soma@gmail.com> wrote:
On Mon, 24 Jun 2019, at 11:25 PM, Marten van Kerkwijk wrote:
Just to be sure: for a 1-d array, you'd both consider `.T` giving a shape of `(n, 1)` the right behaviour? I.e., it should still change from what it is now - which is to leave the shape at `(n,)`.
Just to chime in as a user: v.T should continue to be a silent no-op for 1D arrays. NumPy makes it arbitrary whether a 1D array is viewed as a row or column vector, but we often want to write .T to match the notation in a paper we're implementing.
Why should it be silent? This is a source of bugs. At least in my experience, generally when people write v.T it is a mistake. Either they are coming from another language that works differently, or they failed to properly check their function arguments. And if you are doing it on purpose, you are doing something you know is a no-op for essentially documentation purposes, and I would think that is the sort of thing you need to make as explicit as possible. "Errors should never pass silently. Unless explicitly silenced."
Writing v.T is also sensible if you're writing code that could apply equally well to either a single vector or a stack of vectors. This mirrors the behavior of @, which also allows either single vectors or stacks of vectors (matrices) with the same syntax.
Fair enough. But although there are valid reasons to do a divide by zero, it still causes a warning because it is a problem often enough that people should be made aware of it. I think this is a similar scenario.
![](https://secure.gravatar.com/avatar/a5c6e0b8f64a8a1940f5b2d367c1db6e.jpg?s=120&d=mm&r=g)
On 6/25/2019 11:03 AM, Todd wrote:
Fair enough. But although there are valid reasons to do a divide by zero, it still causes a warning because it is a problem often enough that people should be made aware of it. I think this is a similar scenario.
I side with Stephan on this, but when there are opinions on both sides, I wonder what the resolution strategy is. I suppose there is a possible tension: 1. Existing practice should be privileged (no change for the sake of change). 2. Documented user issues need to be addressed. So what is an "in the wild" example of where numpy users create errors that pass silently because a 1-d array transpose did not behave as expected? Why would the unexpected array shape of the result not alert the user if it happens? In your favor, Mathematica's `Transpose` raises an error when applied to 1d arrays, and the Mma designs are usually carefully considered. Cheers, Alan Isaac
![](https://secure.gravatar.com/avatar/998f5c5403f3657437a3afbf6a16e24b.jpg?s=120&d=mm&r=g)
On Tue, Jun 25, 2019 at 11:47 AM Alan Isaac <alan.isaac@gmail.com> wrote:
Fair enough. But although there are valid reasons to do a divide by zero, it still causes a warning because it is a problem often enough that
On 6/25/2019 11:03 AM, Todd wrote: people should be made aware of it. I
think this is a similar scenario.
I side with Stephan on this, but when there are opinions on both sides, I wonder what the resolution strategy is. I suppose there is a possible tension:
1. Existing practice should be privileged (no change for the sake of change). 2. Documented user issues need to be addressed.
Note that the behavior wouldn't change. Transposing vectors would do exactly what it has always done: nothing. But people would be made aware that the operation they are doing won't actually do anything. I completely agree that change for the sake of change is not a good thing. But we are talking about a no-op here. If someone is intentionally doing something that does nothing, I would like to think that they could deal with a warning that can be easily silenced.
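Silencing such a warning would indeed be cheap for intentional users. A sketch with the standard `warnings` machinery, using a plain `warnings.warn` call as a stand-in for the hypothetical warning NumPy might emit:

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore", UserWarning)
    # Stand-in for what a deliberate `v.T` on a 1D array might trigger:
    warnings.warn(".T on a 1D array is a no-op", UserWarning)

assert caught == []  # the warning was filtered out, never shown
```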
So what is an "in the wild" example of where numpy users create errors that pass silently because a 1-d array transpose did not behave as expected?
Part of the problem with silent errors is that we typically aren't going to see them, by definition. The only way you could catch a silent error like that is if someone noticed the results looked different than they expected, but that can easily be hidden if the error is a corner case that is averaged out. That is the whole point of having a warning: to make it not silent. It reminds me of the old Weisert quote, "As far as we know, our computer has never had an undetected error." The problems I typically encounter are when people write their code assuming that, for example, a trial will have multiple results. It usually does, but on occasion it doesn't. This sort of thing usually results in an error, although it is typically an error far removed from where the problem actually occurs and is therefore extremely hard to debug. I haven't seen truly completely silent errors, but again I wouldn't expect to. We can't really tell how common this sort of thing is until we actively check for it. Remember how many silent errors in encoding were caught once Python 3 started enforcing proper encoding/decoding handling? People insisted encoding was being handled properly with Python 2, but it wasn't, even in massive, mature projects. People just didn't notice the problems before because they were silent. At the very least, the warning could tell people coming from other languages why the transpose is doing something different than they expect, as this is not an uncommon issue on Stack Overflow. [1]
Why would the unexpected array shape of the result not alert the user if it happens?
I think counting on the code to produce an error is really dangerous. I have seen people do a lot of really bizarre things with their code.
In your favor, Mathematica's `Transpose` raises an error when applied to 1d arrays, and the Mma designs are usually carefully considered.
Yes, numpy is really the outlier here in making this a silent no-op. MATLAB, Julia, R, and SAS all transpose vectors, coercing them to matrices if needed. Again, I don't think we should change how the transpose works, it is too late for that. But I do think that people should be warned about it. [1] https://stackoverflow.com/search?q=numpy+transpose+vector (not all of these are relevant, but there are a bunch on there)
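For comparison, getting the MATLAB/Julia-style column vector in NumPy requires being explicit; `.T` alone never does it:

```python
import numpy as np

v = np.arange(3)

# NumPy: .T on a 1D array silently does nothing.
assert v.T.shape == (3,)

# To get a column vector you must say so explicitly:
col = v[:, np.newaxis]
assert col.shape == (3, 1)
assert np.atleast_2d(v).T.shape == (3, 1)  # equivalent alternative
```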
![](https://secure.gravatar.com/avatar/93a76a800ef6c5919baa8ba91120ee98.jpg?s=120&d=mm&r=g)
On Tue, Jun 25, 2019 at 10:14 AM Todd <toddrjen@gmail.com> wrote:
On Tue, Jun 25, 2019 at 11:47 AM Alan Isaac <alan.isaac@gmail.com> wrote:
Fair enough. But although there are valid reasons to do a divide by zero, it still causes a warning because it is a problem often enough that
On 6/25/2019 11:03 AM, Todd wrote: people should be made aware of it. I
think this is a similar scenario.
I side with Stephan on this, but when there are opinions on both sides, I wonder what the resolution strategy is. I suppose there is a possible tension:
1. Existing practice should be privileged (no change for the sake of change). 2. Documented user issues need to be addressed.
Note that the behavior wouldn't change. Transposing vectors would do exactly what it has always done: nothing. But people would be made aware that the operation they are doing won't actually do anything.
I completely agree that change for the sake of change is not a good thing. But we are talking about a no-op here. If someone is intentionally doing something that does nothing, I would like to think that they could deal with a warning that can be easily silenced.
I am strongly opposed to adding warnings for documented and correct behavior that we are not going to change. Warnings are only appropriate in rare cases that demand users' attention, i.e., code that is almost certainly not correct, like division by 0. We have already documented use cases for .T on 1D arrays, such as compatibility with operations also defined on 2D arrays. I also agree with Alan that it is probably too late to change the behavior of .T for arrays with more than 2 dimensions. NumPy could certainly use a more comprehensive policy around backwards compatibility, but we would need to meet a *very* high bar to break it. I am skeptical that the slightly cleaner code facilitated by this new definition for .T would be worth it.
So what is an "in the wild" example of where numpy users create errors that pass silently because a 1-d array transpose did not behave as expected?
Part of the problem with silent errors is that we typically aren't going to see them, by definition. The only way you could catch a silent error like that is if someone noticed the results looked different than they expected, but that can easily be hidden if the error is a corner case that is averaged out. That is the whole point of having a warning, to make it not silent. It reminds me of the old Weisert quote, "As far as we know, our computer has never had an undetected error."
The problems I typically encounter are when people write their code assuming that, for example, a trial will have multiple results. It usually does, but on occasion it doesn't. This sort of thing usually results in an error, although it is typically an error far removed from where the problem actually occurs and is therefore extremely hard to debug. I haven't seen truly completely silent errors, but again I wouldn't expect to.
We can't really tell how common this sort of thing is until we actively check for it. Remember how many silent errors in encoding were caught once Python 3 started enforcing proper encoding/decoding handling? People insisted encoding was being handled properly with Python 2, but it wasn't, even in massive, mature projects. People just didn't notice the problems before because they were silent.
At the very least, the warning could tell people coming from other languages why the transpose is doing something different than they expect, as this is not an uncommon issue on stackoverflow. [1]
Why would the unexpected array shape of the result not alert the user if it happens?
I think counting on the code to produce an error is really dangerous. I have seen people do a lot of really bizarre things with their code.
In your favor, Mathematica's `Transpose` raises an error when applied to 1d arrays, and the Mma designs are usually carefully considered.
Yes, numpy is really the outlier here in making this a silent no-op. MATLAB, Julia, R, and SAS all transpose vectors, coercing them to matrices if needed. Again, I don't think we should change how the transpose works, it is too late for that. But I do think that people should be warned about it.
[1] https://stackoverflow.com/search?q=numpy+transpose+vector (not all of these are relevant, but there are a bunch on there)
![](https://secure.gravatar.com/avatar/b4929294417e9ac44c17967baae75a36.jpg?s=120&d=mm&r=g)
Hi, On Tue, Jun 25, 2019 at 10:57 AM Stephan Hoyer <shoyer@gmail.com> [snip] ...
I also agree with Alan that probably it's too late to change the behavior of .T for arrays with more than 2-dimensions. NumPy could certainly use a more comprehensive policy around backwards compatibility, but we certainly need to meet a *very* high bar to break backwards compatibility. I am skeptical that the slightly cleaner code facilitated by this new definition for .T would be worth it.
I feel strongly that we should have the following policy: * Under no circumstances should we make changes that mean that correct old code will give different results with new Numpy. On the other hand, it's OK (with a suitable period of deprecation) for correct old code to raise an informative error with new Numpy. That means that a.T deprecation -> a.T error -> a.T means a.MT is forbidden, but a.T deprecation -> a.T error is OK. Cheers, Matthew
![](https://secure.gravatar.com/avatar/93a76a800ef6c5919baa8ba91120ee98.jpg?s=120&d=mm&r=g)
On Sun, Jun 23, 2019 at 10:05 PM Stewart Clelland <stewartclelland@gmail.com> wrote:
Hi All,
Based on discussion with Marten on github <https://github.com/numpy/numpy/issues/13797>, I have a couple of suggestions on syntax improvements on array transpose operations.
First, introducing a shorthand for the Hermitian Transpose operator. I thought "A.HT" might be a viable candidate.
I agree that short-hand for the Hermitian transpose would make sense, though I would try to stick with "A.H". It's one of the last reasons to prefer the venerable np.matrix. NumPy arrays already have loads of methods/properties, and this is a case (like @ for matrix multiplication) where the operator significantly improves readability: consider "(x.H @ M @ x) / (x.H @ x)" vs "(x.conj().T @ M @ x) / (x.conj().T @ x)" [1]. Nearly everyone who does linear algebra with complex numbers would find this useful. If I recall correctly, the last time this came up, it was suggested that we might implement this with a NumPy view as a "complex conjugate" dtype rather than a memory copy. This would allow the operation to be essentially free. I find this very appealing, both due to symmetry with ".T" and because of the principle that properties should be cheap to compute. So my tentative vote would be (1) yes, let's do the short-hand attribute, but (2) let's wait until we have a complex conjugate dtype that does this efficiently. My hope is that this should be relatively doable in a year or two, after the current dtype refactor/usability effort comes to fruition. Best, Stephan [1] I copied the first non-trivial example off the Wikipedia page for a Hermitian matrix: https://en.wikipedia.org/wiki/Hermitian_matrix
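Spelled out with today's API (the proposed `.H` attribute does not exist on ndarray, so `x.conj().T` stands in for it):

```python
import numpy as np

x = np.array([1 + 1j, 2 - 1j])
M = np.array([[2, 1j], [-1j, 3]])  # Hermitian: M equals its conjugate transpose
assert np.allclose(M, M.conj().T)

# Today's spelling of the Rayleigh quotient; under the proposal every
# `x.conj().T` below would shorten to `x.H`:
r = (x.conj().T @ M @ x) / (x.conj().T @ x)

# For Hermitian M the quotient is real (up to rounding):
assert abs(r.imag) < 1e-12
```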
![](https://secure.gravatar.com/avatar/998f5c5403f3657437a3afbf6a16e24b.jpg?s=120&d=mm&r=g)
On Mon, Jun 24, 2019 at 11:00 AM Stephan Hoyer <shoyer@gmail.com> wrote:
On Sun, Jun 23, 2019 at 10:05 PM Stewart Clelland < stewartclelland@gmail.com> wrote:
Hi All,
Based on discussion with Marten on github <https://github.com/numpy/numpy/issues/13797>, I have a couple of suggestions on syntax improvements on array transpose operations.
First, introducing a shorthand for the Hermitian Transpose operator. I thought "A.HT" might be a viable candidate.
I agree that short-hand for the Hermitian transpose would make sense, though I would try to stick with "A.H". It's one of the last reasons to prefer the venerable np.matrix. NumPy arrays already have loads of methods/properties, and this is a case (like @ for matrix multiplication) where the operator significantly improves readability: consider "(x.H @ M @ x) / (x.H @ x)" vs "(x.conj().T @ M @ x) / (x.conj().T @ x)" [1]. Nearly everyone who does linear algebra with complex numbers would find this useful.
If I recall correctly, the last time this came up, it was suggested that we might implement this with NumPy view as a "complex conjugate" dtype rather than a memory copy. This would allow the operation to be essentially free. I find this very appealing, both due to symmetry with ".T" and because of the principle that properties should be cheap to compute.
So my tentative vote would be (1) yes, let's do the short-hand attribute, but (2) let's wait until we have a complex conjugate dtype that do this efficiently. My hope is that this should be relatively doable in a year or two after current dtype refactor/usability effect comes to fruition.
Best, Stephan
[1] I copied the first non-trivial example off the Wikipedia page for a Hermitian matrix: https://en.wikipedia.org/wiki/Hermitian_matrix
I would call it .CT or something like that, based on the term "Conjugate transpose". Wikipedia redirects "Hermitian transpose" to "Conjugate transpose", and Google has 49,800 results for "Hermitian transpose" vs 201,000 for "Conjugate transpose" (both with quotes). So "Conjugate transpose" seems to be the more widely-known name. Further, I think what a "Conjugate transpose" does is immediately obvious to someone who isn't already familiar with the term, so long as they know what a "conjugate" and a "transpose" are, while no one would be able to tell what a "Hermitian transpose" is unless they are already familiar with the name. So I have no problem calling it a "Hermitian transpose" somewhere in the docs, but I think the naming and documentation should focus on the "Conjugate transpose" term.
![](https://secure.gravatar.com/avatar/93a76a800ef6c5919baa8ba91120ee98.jpg?s=120&d=mm&r=g)
On Mon, Jun 24, 2019 at 8:10 AM Todd <toddrjen@gmail.com> wrote: <snip>
Sure, we should absolutely document the name as the "Conjugate transpose". But the standard mathematical notation is definitely "A^H" rather than "A^{CT}".
_______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@python.org https://mail.python.org/mailman/listinfo/numpy-discussion
![](https://secure.gravatar.com/avatar/851ff10fbb1363b7d6111ac60194cc1c.jpg?s=120&d=mm&r=g)
Hi Stephan, Yes, the complex conjugate dtype would make things a lot faster, but I don't quite see why we would wait for that before introducing the `.H` property. I do agree that `.H` is the correct name, giving the most immediate clarity (i.e., people who know what a conjugate transpose is will recognize it, while likely having to look up `.CT`; people who do not know will have to look it up regardless). But at the same time I agree that the docstring and other documentation should start with "Conjugate transpose" - good to try to avoid using names of people where you have to be in the "in crowd" to know what it means. The above said, if we were going with the initial suggestion of `.MT` for matrix transpose, then I'd prefer `.CT` over `.HT` as its conjugate version. But it seems there is little interest in that suggestion, although sadly a clear path forward has not yet emerged either. All the best, Marten
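As a thought experiment, the semantics proposed for `.H` amount to a property wrapping the existing idiom. A minimal sketch on a hypothetical subclass (not how the real implementation would look - that would live in C and ideally return a view):

```python
import numpy as np

class HArray(np.ndarray):
    """Hypothetical ndarray subclass illustrating the proposed .H semantics."""
    @property
    def H(self):
        # Conjugate transpose. Note this copies (unlike .T) until a
        # conjugate dtype/view mechanism exists.
        return np.asarray(self).conj().T.view(HArray)

a = np.array([[1 + 2j, 3], [4j, 5]]).view(HArray)
print(np.allclose(a.H, np.asarray(a).conj().T))  # the property matches today's spelling
```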
![](https://secure.gravatar.com/avatar/81e62cb212edf2a8402c842b120d9f31.jpg?s=120&d=mm&r=g)
I think enumerating the cases along the way makes it a bit more tangible for the discussion:

```python
import numpy as np

z = 1+1j
z.conjugate()         # 1-1j

zz = np.array(z)
zz                    # array(1+1j)
zz.T                  # array(1+1j)  OK, expected.
zz.conj()             # 1-1j ?? what happened; no arrays?
zz.conjugate()        # 1-1j ?? same

zz1d = np.array([z]*3)
zz1d.T                # no change, so this is not the regular 2D transpose
zz1d.conj()           # array([1.-1.j, 1.-1.j, 1.-1.j])
zz1d.conj().T         # array([1.-1.j, 1.-1.j, 1.-1.j])
zz1d.T.conj()         # array([1.-1.j, 1.-1.j, 1.-1.j])
zz1d[:, None].conj()  # 2D column vector - no surprises if [:, None] is known

zz2d = zz1d[:, None]  # 2D column vector - no surprises if [:, None] is known
zz2d.conj()           # 2D col vec conjugated
zz2d.conj().T         # 2D col vec conjugated transposed

zz3d = np.arange(24.).reshape(2, 3, 4).view(complex)
zz3d.conj()           # no surprises, conjugated
zz3d.conj().T         # ?? why not the last two dims swapped like other stacked ops

# For scalar arrays conjugation strips the array wrapper
# For 1D arrays transpose is a no-op but conjugation works
# For 2D arrays conjugate is matlab's elementwise conjugation op .'
#     and transpose is acting as expected
# For 3D arrays conjugate is matlab's elementwise conjugation op .'
#     but transpose reverses all dims, just like matlab's permute()
#     with a static dim order
```

and so on. Maybe we can try to identify all the use cases and the quirks before we design the solution. Because these are a bit more involved and I don't even know if this is exhaustive.

On Mon, Jun 24, 2019 at 8:21 PM Marten van Kerkwijk <m.h.vankerkwijk@gmail.com> wrote:
<snip>
![](https://secure.gravatar.com/avatar/b5fbd2bac8ddc5fd368b497e43e9d905.jpg?s=120&d=mm&r=g)
I would love for there to be a .H property. I have .conj().T in almost every math function that I write so that it will be general enough for complex numbers. Besides being less readable, what puts me in a bind is trying to accommodate LinearOperator/LinearMap-like duck-type objects in place of matrix inputs, such as an object that does an FFT but acts like a matrix and supports @. For my objects to work in my code, I have to create .conj() and .T methods, which are not as simple as defining .H (the adjoint) for, say, an FFT. Sometimes I just define .T to be the adjoint/conjugate transpose and .conj() to do nothing so it will work with my code and I can avoid making useless objects along the way, but then I am in a weird state where np.asarray(A).T != np.asarray(A.T).

In my opinion, the matrix transpose operator and the conjugate transpose operator should be one and the same. Something nice about both Julia and MATLAB is that it takes more keystrokes to do a regular transpose than a conjugate transpose. Then people who work exclusively with real numbers can just forget that it's a conjugate transpose, and for relatively simple algorithms, their code will just work with complex numbers with little modification.

Ideally, I'd like to see a .H that was the de facto Matrix/Linear Algebra/Conjugate transpose, which for 2 or more dimensions conjugate-transposes the last two dimensions, and for 1 dimension just conjugates (if necessary). And then .T can stay the Array/Tensor transpose for general axis manipulation. I'd be okay with .T raising an error/warning on 1D arrays if .H did not. I commonly write things like u.conj().T@v even if I know both u and v are 1D, just so it looks more like an inner product.

-Cameron

On Mon, Jun 24, 2019 at 6:43 PM Ilhan Polat <ilhanpolat@gmail.com> wrote:
<snip>
![](https://secure.gravatar.com/avatar/03f2d50ce2e8d713af6058d2aeafab74.jpg?s=120&d=mm&r=g)
On Tue, Jun 25, 2019 at 4:29 AM Cameron Blocker <cameronjblocker@gmail.com> wrote:
In my opinion, the matrix transpose operator and the conjugate transpose operator should be one and the same. Something nice about both Julia and MATLAB is that it takes more keystrokes to do a regular transpose instead of a conjugate transpose. Then people who work exclusively with real numbers can just forget that it's a conjugate transpose, and for relatively simple algorithms, their code will just work with complex numbers with little modification.
I'd argue that MATLAB's feature of `'` meaning adjoint (conjugate transpose etc.) and `.'` meaning regular transpose causes a lot of confusion and probably a lot of subtle bugs. Most people are unaware that `'` does a conjugate transpose and use it habitually, and when for once they have a complex array they don't understand why the values are off (assuming they even notice). Even the MATLAB docs conflate the two operations occasionally, which doesn't help at all. Transpose should _not_ incur conjugation automatically. I'm already a bit wary of special-casing matrix dynamics this much, when ndarrays are naturally multidimensional objects. Making transposes be more than transposes would be a huge mistake in my opinion, already for matrices (2d arrays) and especially for everything else. András
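The distinction above is easy to demonstrate with NumPy as it stands today: `.T` only reorders axes and never touches the values.

```python
import numpy as np

z = np.array([[1 + 1j, 2 - 3j],
              [4j, 5 + 0j]])

z.T[0, 1]         # 4j - the element moved, unconjugated
z.conj().T[0, 1]  # -4j - conjugation is a separate, explicit step
```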
<snip>
![](https://secure.gravatar.com/avatar/81e62cb212edf2a8402c842b120d9f31.jpg?s=120&d=mm&r=g)
I have to disagree; I hardly ever saw such bugs. Moreover, <Zbar, Z> is not compatible if you don't also transpose it, yet that combination is expected in almost all contexts of matrices, vectors and scalars. Elementwise conjugation is well in line with other elementwise operations starting with a dot in matlab, hence still consistent. I would still expect conjugation+transposition to be the default, since just transposing a complex array is way more special and rare than the ubiquitous regular usage. ilhan On Tue, Jun 25, 2019 at 10:57 AM Andras Deak <deak.andris@gmail.com> wrote:
<snip>
![](https://secure.gravatar.com/avatar/03f2d50ce2e8d713af6058d2aeafab74.jpg?s=120&d=mm&r=g)
On Tue, Jun 25, 2019 at 1:03 PM Ilhan Polat <ilhanpolat@gmail.com> wrote:
I have to disagree, I hardly ever saw such bugs <snip>
I know the exact behaviour of MATLAB isn't very relevant for this discussion, but anyway, the reason I think this is a problem in MATLAB is that there are a bunch of confused questions on Stack Overflow due to this behaviour. Just from the first page of this query [1] I could find examples [2-8] (quite a few are about porting MATLAB to numpy or vice versa).
<snip> and moreover <Zbar, Z> is not compatible if you don't also transpose it but expected in almost all contexts of matrices, vectors and scalars. Elementwise conjugation is well inline with other elementwise operations starting with a dot in matlab hence still consistent.
I probably misunderstood your point here, Ilhan, because it sounds to me that you're arguing that conjugation should not come without a transpose. This is different from saying that transpose should not come without conjugation (although I'd object to both). And `.'` is exactly _not_ an elementwise operation in MATLAB: it's the transpose, despite the seemingly element-wise syntax. `arr.'` will _not_ conjugate your array, `arr'` will (while both will transpose). Finally, I don't think "MATLAB does it" is a very good argument anyway; my subjective impression is that several of the issues with np.matrix are due to behaviour that resembles that of MATLAB. But MATLAB is very much built for matrices, and there are no 1d arrays, so they don't end up with some of the pitfalls that numpy does.
I would still expect an conjugation+transposition to be the default since just transposing a complex array is way more special and rare than its ubiquitous regular usage.
Coming back to numpy, I disagree with your statement. I'd say "just transposing a complex _matrix_ is way more special and rare than its ubiquitous regular usage", which is true. Admittedly I have a patchy math background, but to me it seems that the "conjugation and transpose go hand in hand" claim is mostly valid for linear algebra, i.e. actual matrices and vectors. However numpy arrays are much more general, and I very often want to reverse the shape of a complex non-matrix 2d array (i.e. transpose it) for purposes of broadcasting or vectorized matrix operations, and not want to conjugate it in the process. Do you at least agree that the feature of conjugate+transpose as default mostly makes sense for linear algebra, or am I missing other typical (and general numerical programming) use cases? András [1]: https://stackoverflow.com/search?q=%5Bmatlab%5D+conjugate+transpose+is%3Aa&mixed=1 [2]: https://stackoverflow.com/a/45272576 [3]: https://stackoverflow.com/a/54179564 [4]: https://stackoverflow.com/a/42320906 [5]: https://stackoverflow.com/a/23510668 [6]: https://stackoverflow.com/a/11416502 [7]: https://stackoverflow.com/a/49057640 [8]: https://stackoverflow.com/a/54309764
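A small example of the non-matrix use case mentioned above: transposing a complex 2-D array purely to line axes up for broadcasting, where silent conjugation would corrupt the data (the data here is made up):

```python
import numpy as np

# 3 measurements x 4 frequency bins of complex samples (made-up data).
samples = (np.arange(12.0) + 1j).reshape(3, 4)
weights = np.array([0.5, 1.0, 2.0])  # one weight per measurement

# Transpose to (4, 3) so the last axis broadcasts against the weights;
# no conjugation is wanted here - the phases must be preserved.
weighted = samples.T * weights       # shape (4, 3)

print(np.allclose(weighted.imag, weights))  # imag part scaled, not negated
```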
<snip>
![](https://secure.gravatar.com/avatar/851ff10fbb1363b7d6111ac60194cc1c.jpg?s=120&d=mm&r=g)
Hi All, The examples with different notation brought back memory of another solution: define `m.ᵀ` and `m.ᴴ`. This is possible, since python3 allows any unicode for names; it is nicely readable, but admittedly a bit annoying to enter (in emacs, set-input-method to TeX and then ^T, ^H). More seriously, still hoping to move to just being able to use .T and .H as matrix (conjugate) transpose in newer numpy versions: is it really not possible within a property to know whether the context where the operation was defined has some specific "matrix_transpose" variable set? After all, an error or warning generates a stack backtrace, and from the ndarray C code one would have to look only one stack level up (inside a warning, one can even ask for the warning to be given from inside a different level, if I recall correctly). If that is truly impossible, then I think we need different names for both .T and .H. Some suggestions:

1. a.MT, a.MH (original suggestion at top of the thread)
2. a.mT, a.mH (m still for matrix, but not standing out as much, maybe making it easier to guess what it means)
3. a.RT, a.CT (regular and conjugate transpose - the C also reminds of complex)

All the best, Marten
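Whatever the spelling, the last-two-axes semantics all of these names denote are expressible today; any of the proposed attributes would be sugar for something like the following (helper name made up):

```python
import numpy as np

def matrix_transpose(a):
    """Transpose only the last two axes - the behaviour proposed for .mT/.MT."""
    return np.swapaxes(a, -1, -2)

stack = np.arange(24).reshape(2, 3, 4)  # a stack of two 3x4 matrices
print(matrix_transpose(stack).shape)    # (2, 4, 3)
print(stack.T.shape)                    # (4, 3, 2): .T reverses all axes
```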
![](https://secure.gravatar.com/avatar/a5c6e0b8f64a8a1940f5b2d367c1db6e.jpg?s=120&d=mm&r=g)
I wish this discussion would be clearer that a.T is not going anywhere, should not change, and in any case should match a.transpose(). Anything else threatens to break existing code for no good payoff. How many people in this discussion are proposing that a widely used library like numpy should make a breaking change in syntax just because someone guesses it won't break too much code "out there"? I'm having trouble telling if this is an actual view. Because a.T cannot reasonably change, if a.H is allowed, it should mean a.conj().transpose(). This also supports the easiest and least buggy transition away from np.matrix. But since `a.H` would not be a view of `a`, most probably any `a.H` proposal should be discarded as misleading and not materially better than the existing syntax (a.conj().T). I trust nobody is proposing to change `transpose`. Cheers, Alan Isaac
![](https://secure.gravatar.com/avatar/998f5c5403f3657437a3afbf6a16e24b.jpg?s=120&d=mm&r=g)
That is how it is in your field, but not mine. For us, only the conventional transpose is used, even for complex numbers. And I routinely see bugs in MATLAB because of its choice of defaults, and there are probably many more that don't get caught because they happen silently. I think the principle of least surprise should apply here. People who need the conjugate transpose know to make sure they use the right operation. But a lot of people aren't even aware that the conjugate transpose exists; they are just going to copy what they see in the examples without realizing it does the completely wrong thing in certain cases. They wouldn't bother to check because they don't even know there is a second transpose operation they need to look out for. So it would hurt a lot of people without helping anyone. On Tue, Jun 25, 2019, 07:03 Ilhan Polat <ilhanpolat@gmail.com> wrote:
I have to disagree, I hardly ever saw such bugs; moreover, <Zbar, Z> is not compatible if you don't also transpose it, but that is expected in almost all contexts of matrices, vectors and scalars. Elementwise conjugation is well in line with other elementwise operations starting with a dot in MATLAB, hence still consistent.
I would still expect a conjugation+transposition to be the default, since just transposing a complex array is way more special and rare than the ubiquitous regular usage.
ilhan
On Tue, Jun 25, 2019 at 10:57 AM Andras Deak <deak.andris@gmail.com> wrote:
On Tue, Jun 25, 2019 at 4:29 AM Cameron Blocker <cameronjblocker@gmail.com> wrote:
In my opinion, the matrix transpose operator and the conjugate
transpose operator should be one and the same. Something nice about both Julia and MATLAB is that it takes more keystrokes to do a regular transpose instead of a conjugate transpose. Then people who work exclusively with real numbers can just forget that it's a conjugate transpose, and for relatively simple algorithms, their code will just work with complex numbers with little modification.
I'd argue that MATLAB's feature of `'` meaning adjoint (conjugate transpose etc.) and `.'` meaning regular transpose causes a lot of confusion and probably a lot of subtle bugs. Most people are unaware that `'` does a conjugate transpose and use it habitually, and when for once they have a complex array they don't understand why the values are off (assuming they even notice). Even the MATLAB docs conflate the two operations occasionally, which doesn't help at all. Transpose should _not_ incur conjugation automatically. I'm already a bit wary of special-casing matrix dynamics this much, when ndarrays are naturally multidimensional objects. Making transposes be more than transposes would be a huge mistake in my opinion, already for matrices (2d arrays) and especially for everything else.
András
Ideally, I'd like to see a .H that was the de facto Matrix/Linear Algebra/Conjugate transpose that, for 2 or more dimensions, conjugate transposes the last two dimensions, and for 1 dimension just conjugates (if necessary). And then .T can stay the Array/Tensor transpose for general axis manipulation. I'd be okay with .T raising an error/warning on 1D arrays if .H did not. I commonly write things like u.conj().T@v even if I know both u and v are 1D just so it looks more like an inner product.
-Cameron
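[Editor's sketch, not part of the original message.] The semantics Cameron describes can be written today as a small helper; `H` here is a hypothetical function, not an existing NumPy attribute. For ndim >= 2 it conjugate-transposes the last two axes; for 1-D input it only conjugates, so inner products read naturally:

```python
import numpy as np

# Hypothetical helper sketching the proposed .H behaviour.
def H(a):
    a = np.asarray(a)
    if a.ndim >= 2:
        # conjugate transpose of the last two axes (stacked matrices)
        return np.conj(np.swapaxes(a, -1, -2))
    return np.conj(a)  # 1-D: conjugate only, no axis swap

u = np.array([1 + 2j, 3 - 1j])
v = np.array([2 - 1j, 0 + 1j])
print(H(u) @ v)                      # same value as u.conj() @ v
print(H(np.ones((5, 2, 3))).shape)   # (5, 3, 2): per-matrix transpose
```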
On Mon, Jun 24, 2019 at 6:43 PM Ilhan Polat <ilhanpolat@gmail.com> wrote:
I think enumerating the cases along the way makes it a bit more
tangible for the discussion
import numpy as np
z = 1+1j
z.conjugate()  # 1-1j

zz = np.array(z)
zz              # array(1+1j)
zz.T            # array(1+1j)  # OK expected.
zz.conj()       # 1-1j ?? what happened; no arrays?
zz.conjugate()  # 1-1j ?? same

zz1d = np.array([z]*3)
zz1d.T                # no change so this is not the regular 2D array
zz1d.conj()           # array([1.-1.j, 1.-1.j, 1.-1.j])
zz1d.conj().T         # array([1.-1.j, 1.-1.j, 1.-1.j])
zz1d.T.conj()         # array([1.-1.j, 1.-1.j, 1.-1.j])
zz1d[:, None].conj()  # 2D column vector - no surprises if [:, None] is known

zz2d = zz1d[:, None]  # 2D column vector - no surprises if [:, None] is known
zz2d.conj()           # 2D col vec conjugated
zz2d.conj().T         # 2D col vec conjugated transposed

zz3d = np.arange(24.).reshape(2,3,4).view(complex)
zz3d.conj()    # no surprises, conjugated
zz3d.conj().T  # ?? Why not the last two dims swapped like other stacked ops

# For scalar arrays conjugation strips the number
# For 1D arrays transpose is a no-op but conjugation works
# For 2D arrays conjugate it is the matlab's elementwise conjugation op .'
#   and transpose is acting like expected
# For 3D arrays conjugate it is the matlab's elementwise conjugation op .'
#   but transpose is the reversing all dims just like matlab's permute()
#   with static dimorder.
and so on. Maybe we can try to identify all the use cases and the quirks before we design the solution. Because these are a bit more involved, and I don't even know if this list is exhaustive.
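[Editor's sketch, not part of the original message.] The 3-D surprise in the enumeration above can be made concrete with shapes: the current .T reverses all axes, while the matrix-transpose behaviour under discussion would swap only the last two, matching how matmul treats an array as a stack of matrices. Using only existing NumPy calls (the `.mT`-style attribute itself is only a proposal):

```python
import numpy as np

a = np.arange(24.).reshape(2, 3, 4)

# Current behaviour: .T reverses *all* axes.
print(a.T.shape)                     # (4, 3, 2)

# Proposed "matrix transpose": swap only the last two axes.
print(np.swapaxes(a, -1, -2).shape)  # (2, 4, 3)

# This is the shape matmul expects for a stacked product.
b = np.swapaxes(a, -1, -2)
print((a @ b).shape)                 # (2, 3, 3)
```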
On Mon, Jun 24, 2019 at 8:21 PM Marten van Kerkwijk < m.h.vankerkwijk@gmail.com> wrote:
Hi Stephan,
Yes, the complex conjugate dtype would make things a lot faster, but I don't quite see why we would wait for that with introducing the `.H` property.
I do agree that `.H` is the correct name, giving most immediate clarity (i.e., people who know what conjugate transpose is will recognize it, while likely having to look up `.CT`, while people who do not know will have to look up regardless). But at the same time I agree that the docstring and other documentation should start with "Conjugate transpose" - good to try to avoid using names of people where you have to be in the "in crowd" to know what it means.
The above said, if we were going with the initial suggestion of `.MT` for matrix transpose, then I'd prefer `.CT` over `.HT` as its conjugate version.
But it seems there is little interest in that suggestion, although sadly a clear path forward has not yet emerged either.
All the best,
Marten
![](https://secure.gravatar.com/avatar/81e62cb212edf2a8402c842b120d9f31.jpg?s=120&d=mm&r=g)
I think we would have seen a lot of evidence in the last four decades if this was that problematic. You are the second person to mention these bugs. Care to show me some examples of these bugs? Maybe I am missing the point here. I haven't seen any bugs caused by somebody thinking they are just transposing. Using transpose to reshape an array is a different story. That we can discuss. On Tue, Jun 25, 2019, 16:10 Todd <toddrjen@gmail.com> wrote:
![](https://secure.gravatar.com/avatar/998f5c5403f3657437a3afbf6a16e24b.jpg?s=120&d=mm&r=g)
I was saying we shouldn't change the default transpose operation to be conjugate transpose. We don't currently have a conjugate transpose so it isn't an issue. I think having a conjugate transpose is a great idea, I just don't think it should be the default. On Tue, Jun 25, 2019 at 12:12 PM Ilhan Polat <ilhanpolat@gmail.com> wrote:
![](https://secure.gravatar.com/avatar/b5fbd2bac8ddc5fd368b497e43e9d905.jpg?s=120&d=mm&r=g)
It seems to me that the general consensus is that we shouldn't be changing .T to do what we've termed matrix transpose or conjugate transpose. As such, the discussion of whether .T should be changed to throw errors or warnings on 1D arrays seems a bit off topic (not that it shouldn't be discussed). My suggestion that conjugate transpose and matrix transpose be a single operation .H was partially because I thought it would fill 90% of the use cases while limiting the added API. There are times that I am batch processing complex-valued images when what I want is .MT with no conjugation, but I just figured those use cases would be rare or not benefit from a shorthand as much, and we could then add .MT later if the demand presented itself. I agree that the fact that the difference in MATLAB is implicit is bad; to me .H is explicit to people working with complex numbers, but I could be wrong. In regards to Marten's earlier post on names, my preference is .mT for matrix transpose. I prefer .H if what is implemented is equivalent to .conj().T, but if what is implemented is equivalent to .conj().mT, then I'd prefer .mH for symmetry. I also hope there is a way to implement .H/.mH without a copy as was briefly discussed above, otherwise .H()/.mH() might be better at making the copy explicit. On Tue, Jun 25, 2019 at 1:17 PM Todd <toddrjen@gmail.com> wrote:
![](https://secure.gravatar.com/avatar/b4f6d4f8b501cb05fd054944a166a121.jpg?s=120&d=mm&r=g)
On Tue, 2019-06-25 at 14:18 -0400, Cameron Blocker wrote:
It seems to me that the general consensus is that we shouldn't be changing .T to do what we've termed matrix transpose or conjugate transpose. As such, the discussion of whether .T should be changed to throw errors or warnings on 1D arrays seems a bit off topic (not that it shouldn't be discussed).
Yeah, it is a separate thing and is likely better to be discussed after the other discussion has somewhat settled down.
My suggestion that conjugate transpose and matrix transpose be a single operation .H was partially because I thought it would fill 90% of the use cases while limiting the added API. There are times when I am batch-processing complex-valued images and what I want is .MT with no conjugation, but I figured those use cases would be rare or would not benefit from a shorthand as much, and we could then add .MT later if the demand presented itself. I agree that the fact that the difference in MATLAB is implicit is bad; to me .H is explicit to people working with complex numbers, but I could be wrong.
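The `.H` semantics being discussed here can be sketched as a small helper. (This is a hypothetical `conj_transpose` function for illustration, not an existing NumPy attribute.)

```python
import numpy as np

def conj_transpose(a):
    """Hypothetical conjugate "matrix transpose": swap the last two
    axes and conjugate, leaving any leading (batch) axes alone."""
    return np.conj(np.swapaxes(a, -1, -2))

# A stack of two 2x2 complex matrices.
batch = np.array([[[1 + 1j, 2],
                   [3, 4 - 1j]],
                  [[5j, 6],
                   [7, 8]]])
out = conj_transpose(batch)
print(out.shape)   # the leading batch axis is preserved: (2, 2, 2)
print(out[0])      # first matrix, conjugated and transposed
```

Unlike the current `.T`, this keeps batch axes in place, which is the "stacked matrices" behavior the thread keeps returning to.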
True, a lot of use cases may be happy to use .H just to get the matrix transpose operation (although that possibly relies on no-copy semantics). On the other hand, it might be a confusing change to have .T and .H not be interchangeable, which is a point for `.mT` and `.mH`, even if I am even more hesitant on those right now.
In regards to Marten's earlier post on names, my preference is .mT for matrix transpose. I prefer .H if what is implemented is equivalent to .conj().T, but if what is implemented is equivalent to .conj().mT, then I'd prefer .mH for symmetry.
I also hope there is a way to implement .H/.mH without a copy as was briefly discussed above, otherwise .H()/.mH() might be better at making the copy explicit.
To be honest, the copy/no-copy thing is going to be a small issue in any case. There is the idea of a no-copy conjugate view for complex values. Which is great; however, it does not work for object arrays, which have to call `.conjugate()` on each element (and thus have to copy). Which is not to say we cannot do it, but if we go there it is a source of confusion (as is the fact that .H and .T would then behave quite differently). We can return read-only views, which at least fixes one direction of unintentional change here. Making it `.H()` could be better in that regard... - Sebastian
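To make the copy/no-copy point above concrete: a plain transpose is a view of the same data, while conjugation materializes a new array, so `.conj().T` always copies.

```python
import numpy as np

a = np.array([[1 + 2j, 3],
              [4, 5 - 1j]])

# Plain transpose is a view: no data is copied.
print(np.shares_memory(a, a.T))          # True

# conj() creates new data, so conj().T involves a copy.
print(np.shares_memory(a, a.conj().T))   # False
```

A hypothetical no-copy `.H` would have to make the second case behave like the first, which is exactly where the read-only-view idea comes in.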
On Tue, Jun 25, 2019 at 1:17 PM Todd <toddrjen@gmail.com> wrote:
I was saying we shouldn't change the default transpose operation to be conjugate transpose. We don't currently have a conjugate transpose so it isn't an issue. I think having a conjugate transpose is a great idea, I just don't think it should be the default.
On Tue, Jun 25, 2019 at 12:12 PM Ilhan Polat <ilhanpolat@gmail.com> wrote:
I think we would have seen a lot of evidence in the last four decades if this was that problematic.
You are the second person to mention these bugs. Care to show me some examples of these bugs?
Maybe I am missing the point here. I haven't seen any bugs caused by somebody thinking they were just transposing.
Using transpose to reshape an array is a different story. That we can discuss.
On Tue, Jun 25, 2019, 16:10 Todd <toddrjen@gmail.com> wrote:
That is how it is in your field, but not mine. For us we only use the conventional transpose, even for complex numbers. And I routinely see bugs in MATLAB because of its choice of defaults, and there are probably many more that don't get caught because they happen silently.
I think the principle of least surprise should apply here. People who need the conjugate transpose know to make sure they use the right operation. But a lot of people aren't even aware that the conjugate transpose exists; they are just going to copy what they see in the examples without realizing it does the completely wrong thing in certain cases. They wouldn't bother to check because they don't even know there is a second transpose operation they need to look out for. So it would hurt a lot of people without helping anyone.
On Tue, Jun 25, 2019, 07:03 Ilhan Polat <ilhanpolat@gmail.com> wrote:
I have to disagree; I hardly ever saw such bugs, and moreover <Zbar, Z> is not compatible if you don't also transpose it, yet the conjugate transpose is expected in almost all contexts of matrices, vectors and scalars. Elementwise conjugation is well in line with other elementwise operations starting with a dot in MATLAB, hence still consistent.
I would still expect a conjugation+transposition to be the default, since just transposing a complex array is way more special and rare than its ubiquitous regular usage.
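The ubiquity being argued for here shows up already in the complex inner product: without the conjugate, the "squared norm" is not even real.

```python
import numpy as np

z = np.array([1 + 1j, 2 - 3j])

# Without conjugation the result is a complex number -- a silent
# error if a real, nonnegative norm was expected.
wrong = z @ z            # (1+1j)^2 + (2-3j)^2 = -5 - 10j
right = z.conj() @ z     # |1+1j|^2 + |2-3j|^2 = 2 + 13 = 15
print(wrong, right)
```

This is the case where defaulting to the conjugate transpose silently does the right thing, and a plain transpose silently does not.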
ilhan
On Tue, Jun 25, 2019 at 10:57 AM Andras Deak < deak.andris@gmail.com> wrote:
On Tue, Jun 25, 2019 at 4:29 AM Cameron Blocker <cameronjblocker@gmail.com> wrote: > > In my opinion, the matrix transpose operator and the conjugate transpose operator should be one and the same. Something nice about both Julia and MATLAB is that it takes more keystrokes to do a regular transpose instead of a conjugate transpose. Then people who work exclusively with real numbers can just forget that it's a conjugate transpose, and for relatively simple algorithms, their code will just work with complex numbers with little modification. >
I'd argue that MATLAB's feature of `'` meaning adjoint (conjugate transpose etc.) and `.'` meaning regular transpose causes a lot of confusion and probably a lot of subtle bugs. Most people are unaware that `'` does a conjugate transpose and use it habitually, and when for once they have a complex array they don't understand why the values are off (assuming they even notice). Even the MATLAB docs conflate the two operations occasionally, which doesn't help at all. Transpose should _not_ incur conjugation automatically. I'm already a bit wary of special-casing matrix dynamics this much, when ndarrays are naturally multidimensional objects. Making transposes be more than transposes would be a huge mistake in my opinion, already for matrices (2d arrays) and especially for everything else.
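The NumPy analogue of the `'` versus `.'` trap described above: for real arrays the adjoint and the plain transpose coincide, so habitual use of the wrong one only surfaces once complex data appears.

```python
import numpy as np

r = np.array([[1., 2.], [3., 4.]])
c = np.array([[1 + 1j, 2], [3, 4 - 1j]])

# For real data the conjugate transpose and plain transpose coincide ...
print(np.array_equal(r.conj().T, r.T))   # True

# ... but for complex data they differ, which is where habitual use
# of the wrong operation goes unnoticed until it bites.
print(np.array_equal(c.conj().T, c.T))   # False
```

This is why conflating the two operations in a single short spelling hides bugs rather than preventing them.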
András
> Ideally, I'd like to see a .H that was the defacto Matrix/Linear Algebra/Conjugate transpose that for 2 or more dimensions, conjugate transposes the last two dimensions and for 1 dimension just conjugates (if necessary). And then .T can stay the Array/Tensor transpose for general axis manipulation. I'd be okay with .T raising an error/warning on 1D arrays if .H did not. I commonly write things like u.conj().T@v even if I know both u and v are 1D just so it looks more like an inner product. > > -Cameron > > On Mon, Jun 24, 2019 at 6:43 PM Ilhan Polat < ilhanpolat@gmail.com> wrote: >> >> I think enumerating the cases along the way makes it a bit more tangible for the discussion >> >> >> import numpy as np >> z = 1+1j >> z.conjugate() # 1-1j >> >> zz = np.array(z) >> zz # array(1+1j) >> zz.T # array(1+1j) # OK expected. >> zz.conj() # 1-1j ?? what happened; no arrays? >> zz.conjugate() # 1-1j ?? same >> >> zz1d = np.array([z]*3) >> zz1d.T # no change so this is not the regular 2D array >> zz1d.conj() # array([1.-1.j, 1.-1.j, 1.-1.j]) >> zz1d.conj().T # array([1.-1.j, 1.-1.j, 1.-1.j]) >> zz1d.T.conj() # array([1.-1.j, 1.-1.j, 1.-1.j]) >> zz1d[:, None].conj() # 2D column vector - no surprises if [:, None] is known >> >> zz2d = zz1d[:, None] # 2D column vector - no surprises if [:, None] is known >> zz2d.conj() # 2D col vec conjugated >> zz2d.conj().T # 2D col vec conjugated transposed >> >> zz3d = np.arange(24.).reshape(2,3,4).view(complex) >> zz3d.conj() # no surprises, conjugated >> zz3d.conj().T # ?? Why not the last two dims swapped like other stacked ops >> >> # For scalar arrays conjugation strips the number >> # For 1D arrays transpose is a no-op but conjugation works >> # For 2D arrays conjugate it is the matlab's elementwise conjugation op .' >> # and transpose is acting like expected >> # For 3D arrays conjugate it is the matlab's elementwise conjugation op .' 
>> # but transpose is the reversing all dims just like matlab's permute() >> # with static dimorder. >> >> and so on. Maybe we can try to identify all the use cases and the quirks before we can make design the solution. Because these are a bit more involved and I don't even know if this is exhaustive. >> >> >> On Mon, Jun 24, 2019 at 8:21 PM Marten van Kerkwijk < m.h.vankerkwijk@gmail.com> wrote: >>> >>> Hi Stephan, >>> >>> Yes, the complex conjugate dtype would make things a lot faster, but I don't quite see why we would wait for that with introducing the `.H` property. >>> >>> I do agree that `.H` is the correct name, giving most immediate clarity (i.e., people who know what conjugate transpose is, will recognize it, while likely having to look up `.CT`, while people who do not know will have to look up regardless). But at the same time agree that the docstring and other documentation should start with "Conjugate tranpose" - good to try to avoid using names of people where you have to be in the "in crowd" to know what it means. >>> >>> The above said, if we were going with the initial suggestion of `.MT` for matrix transpose, then I'd prefer `.CT` over `.HT` as its conjugate version. >>> >>> But it seems there is little interest in that suggestion, although sadly a clear path forward has not yet emerged either. 
>>> >>> All the best, >>> >>> Marten >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion@python.org >>> https://mail.python.org/mailman/listinfo/numpy-discussion >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion@python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion@python.org > https://mail.python.org/mailman/listinfo/numpy-discussion _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@python.org https://mail.python.org/mailman/listinfo/numpy-discussion
![](https://secure.gravatar.com/avatar/864a9aae8fc483e0d7defbe91db93d15.jpg?s=120&d=mm&r=g)
вт, 25 июн. 2019 г. в 21:20, Cameron Blocker <cameronjblocker@gmail.com>:
It seems to me that the general consensus is that we shouldn't be changing .T to do what we've termed matrix transpose or conjugate transpose.
Reading through this thread, I cannot say that I have the same opinion: at first, many looked positively at the possibility of changing `arr.T` to mean a transpose of the last two dimensions by default. It is rather difficult to follow what is currently being discussed in this thread, probably because several different (albeit related) topics are being discussed at once. I would suggest first discussing the `arr.T` change, because the other topics somewhat depend on it (`arr.MT`/`arr.CT`/`arr.H` and others). p.s.: The documentation for `.T` shows only two examples, one for the 1d case - to show that it works - and one for the 2d case. Maybe that means something? (especially for new `numpy` users.) with kind regards, -gdg
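For reference, the current behavior versus the proposed one, side by side. (`.swapaxes(-1, -2)` stands in for the proposed last-two-dimensions transpose.)

```python
import numpy as np

a = np.ones((2, 3, 4))

# Today, .T reverses *all* axes ...
print(a.T.shape)                  # (4, 3, 2)

# ... while the proposed meaning would swap only the last two:
print(a.swapaxes(-1, -2).shape)   # (2, 4, 3)
```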
![](https://secure.gravatar.com/avatar/5f88830d19f9c83e2ddfd913496c5025.jpg?s=120&d=mm&r=g)
On Tue, Jun 25, 2019 at 10:17 PM Kirill Balunov <kirillbalunov@gmail.com> wrote:
вт, 25 июн. 2019 г. в 21:20, Cameron Blocker <cameronjblocker@gmail.com>:
It seems to me that the general consensus is that we shouldn't be changing .T to do what we've termed matrix transpose or conjugate transpose.
Reading through this thread, I can not say that I have the same opinion - at first, many looked positively at the possibility of change - `arr.T` to mean a transpose of the last two dimensions by default, and then people start discussing several different (albeit related) topics at once. So, I want to point out that it is rather difficult to follow what is currently discussed in this thread, probably because several different (albeit related) topics are being discussed at once. I would suggest at first discuss `arr.T` change, because other topics somewhat depend on that (`arr.MT`/`arr.CT`/`arr.H` and others).
Perhaps not full consensus among the many people with different opinions and interests. But for the first one, the arr.T change: it's clear that this won't happen. Between Juan's examples of valid use and what Stephan and Matthew said, there's not much more to add. We're not going to change correct code for minor benefits.
p.s: Documentation about `.T` shows only two examples, for 1d - to show that it works and for 2d case. Maybe it means something? (especially for new `numpy` users. )
That only means that there's a limit to the number of examples we've managed to put in docstrings. Ralf
![](https://secure.gravatar.com/avatar/864a9aae8fc483e0d7defbe91db93d15.jpg?s=120&d=mm&r=g)
Only concerns #4 from Ilhan's list. ср, 26 июн. 2019 г. в 00:01, Ralf Gommers <ralf.gommers@gmail.com>:
[....]
Perhaps not full consensus between the many people with different opinions and interests. But for the first one, arr.T change: it's clear that this won't happen.
To begin with, I must admit that I am not familiar with the accepted policy of introducing changes to NumPy. But I find it quite nonconstructive just to say - it will not happen. What then is the point in the discussion?
Between Juan's examples of valid use, and what Stephan and Matthew said, there's not much more to add. We're not going to change correct code for minor benefits.
I fully agree that any feature can find its use, valid or not is another question. Juan did not present these examples, but I will allow myself to assume that it is more correct to describe what is being done there as a permutation, and not a transpose. In addition, in the very next sentence, Juan adds that "These could be easily changed to .transpose() (honestly they probably should!)" We're not going to change correct code for minor benefits.
It's fair; I personally have no preferences either way, the most important thing for me is that the 2d case works correctly. To be honest, until today I thought that `.T` would raise for `ndim > 2`. At least that's what my experience told me. For example:
Matlab - Error using .' Transpose on ND array is not defined. Use PERMUTE instead.
Julia - transpose not defined for Array(Float64, 3). Consider using permutedims for higher-dimensional arrays.
Sympy - raise ValueError("array rank not 2")
Here, I agree with the authors that, to begin with, `transpose` is not the best name, since in general it doesn't fit any mathematical definition (of course it will depend on what we take as an element) or a definition from linear algebra. Thus the name `transpose` only leads to confusion. As a note about another suggestion - `.T` meaning a transpose of the last two dimensions - in Mathematica the authors for some reason did the opposite (personally, I could not understand why they made such a choice :) ): Transpose[list] transposes the first two levels in list. I feel strongly that we should have the following policy:
* Under no circumstances should we make changes that mean that correct old code will give different results with new Numpy.
I find this an overly strict rule that does not allow us to evolve. I completely agree that a silent change in behavior is a disaster, and that changing behavior (if it is not an error) in the same minor version (1.X.Y) is not acceptable, but I see no reason to extend this rule to a major version bump (2.A.B), especially if it allows something to improve. I would suggest a rough roadmap of change (I foresee my loneliness in this :)), also considering this comment:
Personally I would find any divergence between a.T and a.transpose() to be rather surprising.
It will be as follows:
1. In 1.18, add the `.permute` method to the array, with the same semantics as `.transpose`.
2. Starting from 1.18, emit `FutureWarning`/`DeprecationWarning` for `.transpose` and advise replacing it with `.permute`.
3. Starting from 1.18, for `.T` with `ndim > 2`, emit a `FutureWarning`, with a note that in future versions the behavior will change.
4. In version 2, remove `.transpose` and change the behavior of `.T`.
Regarding `.T` with `ndim > 2`, I don't have a preference between an error and a transpose of the last two dimensions. with kind regards, -gdg
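The warning step of such a roadmap could look roughly like the following sketch. (The `ndarray` subclass is purely illustrative; this is not how NumPy would actually implement the change.)

```python
import warnings
import numpy as np

class WarnT(np.ndarray):
    """Illustrative subclass: warn when .T is used on an array
    with more than two dimensions."""
    @property
    def T(self):
        if self.ndim > 2:
            warnings.warn(
                "in a future version .T will transpose only the last "
                "two dimensions; use .transpose() to reverse all axes",
                FutureWarning, stacklevel=2)
        return self.transpose()

a = np.ones((2, 3, 4)).view(WarnT)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    shape = a.T.shape          # still the old behavior, plus a warning
print(shape, len(caught))      # (4, 3, 2) 1
```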
![](https://secure.gravatar.com/avatar/5f88830d19f9c83e2ddfd913496c5025.jpg?s=120&d=mm&r=g)
On Wed, Jun 26, 2019 at 10:04 PM Kirill Balunov <kirillbalunov@gmail.com> wrote:
Only concerns #4 from Ilhan's list.
ср, 26 июн. 2019 г. в 00:01, Ralf Gommers <ralf.gommers@gmail.com>:
[....]
Perhaps not full consensus between the many people with different opinions and interests. But for the first one, arr.T change: it's clear that this won't happen.
To begin with, I must admit that I am not familiar with the accepted policy of introducing changes to NumPy. But I find it quite nonconstructive just to say - it will not happen. What then is the point in the discussion?
There has been a *very* long discussion already, and several others on the same topic before. There are also long-standing ways of dealing with backwards compatibility - e.g. what Matthew said is not new, it's an agreed upon way of working. http://www.numpy.org/neps/nep-0023-backwards-compatibility.html lists some principles. That NEP is not yet accepted (it needs rework), but it gives a good idea of what does and does not go.
Between Juan's examples of valid use, and what Stephan and Matthew said, there's not much more to add. We're not going to change correct code for minor benefits.
I fully agree that any feature can find its use, valid or not is another question. Juan did not present these examples, but I will allow myself to assume that it is more correct to describe what is being done there as a permutation, and not a transpose. In addition, in the very next sentence, Juan adds that "These could be easily changed to .transpose() (honestly they probably should!)"
We're not going to change correct code for minor benefits.
It's fair, I personally have no preferences in both cases, the most important thing for me is that in the 2d case it works correctly. To be honest, until today, I thought that `.T` will raise for` ndim > 2`. At least that's what my experience told me. For example in
Matlab - Error using .' Transpose on ND array is not defined. Use PERMUTE instead.
Julia - transpose not defined for Array(Float64, 3). Consider using permutedims for higher-dimensional arrays.
Sympy - raise ValueError("array rank not 2")
Here, I agree with the authors that, to begin with, `transpose` is not the best name, since in general it doesn’t fit as an any mathematical definition (of course it will depend on what we take as an element) or a definition from linear algebra. Thus the name `transpose` only leads to confusion.
For a note about another suggestion - `.T` to mean a transpose of the last two dimensions, in Mathematica authors for some reason did the opposite (personally, I could not understand why they made such a choice :) ):
Transpose[list] transposes the first two levels in list.
I feel strongly that we should have the following policy:
* Under no circumstances should we make changes that mean that correct old code will give different results with new Numpy.
I find this overly strict rules that do not allow to evolve. I completely agree that a silent change in behavior is a disaster, that changing behavior (if it is not an error) in the same minor version (1.X.Y) is not acceptable, but I see no reason to extend this rule for a major version bump (2.A.B.), especially if it allows something to improve.
I'm sorry, you'll have to live with this rule. We've had lots of discussion about this rule in many concrete cases. When existing code is buggy or is consistently confusing many users, we can discuss. But in general changing old code to do something else is a terrible idea.
I would see such a rough version of a roadmap of change (I foresee my loneliness in this :)) Also considering this comment
Personally I would find any divergence between a.T and a.transpose()
to be rather surprising.
it will be as follows:
1. in 1.18 add the `.permute` method to the array, with the same semantics as `.transpose`. 2. Starting from 1.18, emit `FutureWarning`, ` DeprectationWarning` for `.transpose` and advise replacing it with `.permute`. 3. Starting from 1.18 for `.T` with` ndim> 2`, emit a `FutureWarning`, with a note that in future versions the behavior will change. 4. In version 2, remove the `.transpose` and change the behavior for `.T`.
This is simply not enough. Many users will skip versions when upgrading. There must be an exceptionally good reason to change numerical results, and this simply is not one. Cheers, Ralf
![](https://secure.gravatar.com/avatar/851ff10fbb1363b7d6111ac60194cc1c.jpg?s=120&d=mm&r=g)
Hi Ralf, I realize you feel strongly that this whole thread is rehashing history, but I think it is worth pointing out that many seem to consider that the criterion for allowing backward incompatible changes, i.e., that "existing code is buggy or is consistently confusing many users", is actually fulfilled here. Indeed, this appears true to such an extent that even those among the steering council do not agree: while the topic of this thread was about introducing *new* properties (because in the relevant issue I had suggested to Stewart that it was not possible to change .T), it was Eric who brought up the question whether we shouldn't just change `.T` after all. And in the relevant issue, Sebastian noted that "I am not quite convinced that we cannot change .T (at least in the sense of deprecation) myself", with Chuck chiming in that "I don't recall being in opposition, and I also think the current transpose is not what we want." That makes three of your fellow steering council members who are not sure, despite all the previous discussions (of which Chuck surely has seen most - sorry, Chuck!). It seems to me the only sure way in which we can avoid future discussions is to actually address the underlying problem. E.g., is the cost of deprecating & changing .T truly that much more than even having this discussion? All the best, Marten On Wed, Jun 26, 2019 at 4:18 PM Ralf Gommers <ralf.gommers@gmail.com> wrote:
![](https://secure.gravatar.com/avatar/b4f6d4f8b501cb05fd054944a166a121.jpg?s=120&d=mm&r=g)
On Wed, 2019-06-26 at 17:22 -0400, Marten van Kerkwijk wrote:
Hi Ralf,
I realize you feel strongly that this whole thread is rehashing history, but I think it is worth pointing out that many seem to consider that the criterion for allowing backward incompatible changes, i.e., that "existing code is buggy or is consistently confusing many users", is actually fulfilled here.
Indeed, this appears true to such an extent that even those among the steering council do not agree: while the topic of this thread was about introducing *new* properties (because in the relevant issue I had suggested to Steward it was not possible to change .T), it was Eric who brought up the question whether we shouldn't just change `.T` after all. And in the relevant issue, Sebastian noted that "I am not quite convinced that we cannot change .T (at least in the sense of deprecation) myself", with Chuck chiming in that "I don't recall being in opposition, and I also think the current transpose is not what we want."
That makes three of your fellow steering council members who are not sure, despite all the previous discussions (of which Chuck surely has seen most - sorry, Chuck!).
It seems to me the only sure way in which we can avoid future discussions is to actually address the underlying problem. E.g., is the cost of deprecating & changing .T truly that much more than even having this discussion?
To me, I think what we have here is simply that if we want to do it, it will be an uphill battle. And an uphill battle may mean that we have to write something close to a NEP, including seeing how much code blows up, e.g. by providing an environment-variable-switchable behaviour or so. I think it would be better to approach it from that side: what is necessary to be convincing enough? The problem of going from one behaviour to another (at least without a many-year waiting period) is real though; it is not uncommon to leave scripts lying around for 5 years... So in that sense, I would agree that to really switch behaviour (not just error), it would need extremely careful analysis, which may not be feasible. OTOH, some other options, such as a new name or deprecations (or warnings), do not have such fundamental problems. Quite honestly, I am not sure that deprecating `.T` completely for high dimensions is much more painful than e.g. the move of factorial in scipy, which forced me to modify a lot of my scripts (ok, it's search+replace instead of replacing one line). We could go further of course, and say we do a "painful major" release at some point with things like py3k warnings and all. But we probably need more good reasons than a `.T`, and in-person discussions, before even considering it. Best, Sebastian
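The environment-variable-switchable behaviour mentioned above could be prototyped along these lines. (The variable name `NUMPY_T_LAST_TWO` and the helper function are made up for illustration; nothing like this exists in NumPy.)

```python
import os
import numpy as np

def transpose_t(a):
    """Sketch: let an environment variable select between the old
    (reverse all axes) and proposed (swap last two axes) meanings
    of .T while existing code is audited for breakage."""
    if os.environ.get("NUMPY_T_LAST_TWO", "0") == "1":
        return np.swapaxes(a, -1, -2)
    return a.transpose()

a = np.ones((2, 3, 4))
os.environ["NUMPY_T_LAST_TWO"] = "0"
print(transpose_t(a).shape)   # old behavior: (4, 3, 2)
os.environ["NUMPY_T_LAST_TWO"] = "1"
print(transpose_t(a).shape)   # proposed behavior: (2, 4, 3)
```

Running a test suite under both settings would give a rough measure of how much real code actually depends on the all-axes reversal.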
All the best,
Marten
On Wed, Jun 26, 2019 at 4:18 PM Ralf Gommers <ralf.gommers@gmail.com> wrote:
On Wed, Jun 26, 2019 at 10:04 PM Kirill Balunov < kirillbalunov@gmail.com> wrote:
Only concerns #4 from Ilhan's list.
ср, 26 июн. 2019 г. в 00:01, Ralf Gommers <ralf.gommers@gmail.com
: [....]
Perhaps not full consensus between the many people with different opinions and interests. But for the first one, arr.T change: it's clear that this won't happen.
To begin with, I must admit that I am not familiar with the accepted policy of introducing changes to NumPy. But I find it quite nonconstructive just to say - it will not happen. What then is the point in the discussion?
There has been a *very* long discussion already, and several others on the same topic before. There are also long-standing ways of dealing with backwards compatibility - e.g. what Matthew said is not new, it's an agreed upon way of working. http://www.numpy.org/neps/nep-0023-backwards-compatibility.html lists some principles. That NEP is not yet accepted (it needs rework), but it gives a good idea of what does and does not go.
Between Juan's examples of valid use, and what Stephan and Matthew said, there's not much more to add. We're not going to change correct code for minor benefits.
I fully agree that any feature can find its use, valid or not is another question. Juan did not present these examples, but I will allow myself to assume that it is more correct to describe what is being done there as a permutation, and not a transpose. In addition, in the very next sentence, Juan adds that "These could be easily changed to .transpose() (honestly they probably should!)"
We're not going to change correct code for minor benefits.
It's fair; I personally have no preferences in both cases, the most important thing for me is that in the 2d case it works correctly. To be honest, until today, I thought that `.T` would raise for `ndim > 2`. At least that's what my experience told me. For example in
Matlab - Error using .' Transpose on ND array is not defined. Use PERMUTE instead.
Julia - transpose not defined for Array(Float64, 3). Consider using permutedims for higher-dimensional arrays.
Sympy - raise ValueError("array rank not 2")
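For contrast with the errors above, NumPy's current behaviour can be sketched as follows (a minimal illustration of the semantics under discussion):

```python
import numpy as np

a = np.zeros((2, 3, 4))

# NumPy's current .T reverses *all* axes:
print(a.T.shape)                      # (4, 3, 2)

# A "matrix transpose" of the last two axes needs an explicit call:
print(np.swapaxes(a, -2, -1).shape)   # (2, 4, 3)
print(a.transpose(0, 2, 1).shape)     # (2, 4, 3)
```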
Here, I agree with the authors that, to begin with, `transpose` is not the best name, since in general it doesn't fit any mathematical definition (of course it will depend on what we take as an element) or a definition from linear algebra. Thus the name `transpose` only leads to confusion.
For a note about another suggestion - `.T` to mean a transpose of the last two dimensions, in Mathematica authors for some reason did the opposite (personally, I could not understand why they made such a choice :) ):
Transpose[list] transposes the first two levels in list.
I feel strongly that we should have the following policy:
* Under no circumstances should we make changes that mean that correct old code will give different results with new Numpy.
I find this an overly strict rule that does not allow evolution. I completely agree that a silent change in behavior is a disaster, and that changing behavior (if it is not an error) in the same minor version (1.X.Y) is not acceptable, but I see no reason to extend this rule to a major version bump (2.A.B), especially if it allows something to improve.
I'm sorry, you'll have to live with this rule. We've had lots of discussion about this rule in many concrete cases. When existing code is buggy or is consistently confusing many users, we can discuss. But in general changing old code to do something else is a terrible idea.
I would propose a rough roadmap of change (I foresee my loneliness in this :)). Also considering this comment
Personally I would find any divergence between a.T and a.transpose() to be rather surprising.
it will be as follows:
1. In 1.18, add the `.permute` method to the array, with the same semantics as `.transpose`.
2. Starting from 1.18, emit a `FutureWarning`/`DeprecationWarning` for `.transpose` and advise replacing it with `.permute`.
3. Starting from 1.18, for `.T` with `ndim > 2`, emit a `FutureWarning`, with a note that in future versions the behavior will change.
4. In version 2, remove `.transpose` and change the behavior of `.T`.
This is simply not enough. Many users will skip versions when upgrading. There must be an exceptionally good reason to change numerical results, and this simply is not one.
Cheers, Ralf
_______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@python.org https://mail.python.org/mailman/listinfo/numpy-discussion
![](https://secure.gravatar.com/avatar/5f88830d19f9c83e2ddfd913496c5025.jpg?s=120&d=mm&r=g)
On Wed, Jun 26, 2019 at 11:24 PM Marten van Kerkwijk < m.h.vankerkwijk@gmail.com> wrote:
Hi Ralf,
I realize you feel strongly that this whole thread is rehashing history,
The .H part was. But Cameron volunteered to work on a solution that satisfies all concerns.

...but I think it is worth pointing out that many seem to consider that the
criterion for allowing backward incompatible changes, i.e., that "existing code is buggy or is consistently confusing many users", is actually fulfilled here.
Indeed, this appears true to such an extent that even those among the steering council do not agree: while the topic of this thread was about introducing *new* properties (because in the relevant issue I had suggested to Stewart it was not possible to change .T), it was Eric who brought up the question whether we shouldn't just change `.T` after all.
Yes, and then he came up with a better suggestion in https://github.com/numpy/numpy/issues/13835. If a comment starts with "this may be contentious" and then real-world correct code like in scikit-image shows up, it's better not to dig it back up after 60 emails to make your point.....

And in the relevant issue, Sebastian noted that "I am not quite convinced that we cannot change .T (at least in the sense of deprecation)" ("deprecation" I interpret as "to raise an error after" myself), with Chuck chiming in that "I don't recall being in opposition, and I also think the current transpose is not what we want."
That makes three of your fellow steering council members who are not sure, despite all the previous discussions (of which Chuck surely has seen most - sorry, Chuck!).
It seems to me the only sure way in which we can avoid future discussions is to actually address the underlying problem. E.g., is the cost of deprecating & changing .T truly that much more than even having this discussion?
Yes it is. Seriously, every time someone proposes something like this, eventually a better solution is found. Raising for .T on >2-D is a possibility, in case the problem is really *that* bad (which doesn't seem to be the case - but if so, please propose that instead). Changing the meaning of .T to give changed numerical results is not acceptable (not just my opinion, also Matthew, Alan, and Stephan said no). If you've been on this list for a few years, you really should understand that by now. I'll quote Matthew again, who said it best: "Under no circumstances should we make changes that mean that correct old code will give different results with new Numpy. On the other hand, it's OK (with a suitable period of deprecation) for correct old code to raise an informative error with new Numpy." Cheers, Ralf
![](https://secure.gravatar.com/avatar/96dd777e397ab128fedab46af97a3a4a.jpg?s=120&d=mm&r=g)
I agree with Ralf that `*.T` should be left alone, it is widely used and changing its behavior is bound to lead to broken code. I could see `*.mT` or `*.mH`, but I'm beginning to wonder if we would not be better served with a better matrix class that could also deal intelligently with stacks of row and column vectors. In the past I have preferred `einsum` over `@` precisely because it made handling those variations easy. The `@` operator is very convenient at a low level, but it simply cannot deal with stacks of mixed types in generality. With a class we could do something about that. Chuck
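Chuck's point about `einsum` handling stacked operands can be made concrete with a small sketch; for plain stacks of matrices the `@` and `einsum` spellings agree, with `einsum` making the index bookkeeping explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2, 3))  # a stack of five 2x3 matrices
B = rng.standard_normal((5, 3, 4))  # a stack of five 3x4 matrices

# The @ operator broadcasts over the leading "stack" dimension...
C1 = A @ B

# ...and einsum spells out the same contraction over the last two axes:
C2 = np.einsum('...ij,...jk->...ik', A, B)

print(C1.shape)               # (5, 2, 4)
print(np.allclose(C1, C2))    # True
```

Where `einsum` pulls ahead, as Chuck notes, is when the operands are mixed stacks of row/column vectors and matrices, which `@` cannot express in generality.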
![](https://secure.gravatar.com/avatar/81e62cb212edf2a8402c842b120d9f31.jpg?s=120&d=mm&r=g)
I've finally gone through the old discussion and got the counter-argument in one of Dag Sverre's replies http://numpy-discussion.10968.n7.nabble.com/add-H-attribute-tp34474p34668.ht... TL;DR: I disagree with [...adding the .H attribute...] being forward looking, as
it explicitly creates a situation where code will break if .H becomes a view
This actually makes perfect sense and is a valid concern that I had not considered before. The remaining question is why we treat returning a view as a requirement. We have been using .conj().T and receiving copies of the arrays, with equally inefficient code, for many years. Then the discussion diverges to other things, hence I am not sure where this requirement comes from. But I guess this part should be rehashed more clearly next time :) On Thu, Jun 27, 2019 at 12:03 AM Charles R Harris <charlesr.harris@gmail.com> wrote:
![](https://secure.gravatar.com/avatar/5f88830d19f9c83e2ddfd913496c5025.jpg?s=120&d=mm&r=g)
On Thu, Jun 27, 2019 at 4:19 AM Ilhan Polat <ilhanpolat@gmail.com> wrote:
I've finally gone through the old discussion and finally got the counter-argument in one of the Dag Sverre's replies http://numpy-discussion.10968.n7.nabble.com/add-H-attribute-tp34474p34668.ht...
TL; DR
I disagree with [...adding the .H attribute...] being forward looking, as
it explicitly creates a situation where code will break if .H becomes a view
This actually makes perfect sense and a valid concern that I have not considered before.
The remaining question is why we treat as if returning a view is a requirement. We have been using .conj().T and receiving the copies of the arrays since that day with equally inefficient code after many years. Then the discussion diverges to other things hence I am not sure where does this requirement come from.
I think that's in that thread somewhere in more detail, but the summary is: 1. properties imply that they're cheap computationally 2. .T returning a view and .H a copy would be inconsistent and unintuitive There may be one more argument, this is just from memory. Cheers, Ralf
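The view-vs-copy asymmetry behind point 2 is easy to demonstrate with `np.shares_memory`:

```python
import numpy as np

a = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])

# .T is a constant-time view sharing the original buffer:
print(np.shares_memory(a, a.T))         # True

# .conj() must allocate a new array, so .conj().T (and hence a
# hypothetical .H property) cannot be a cheap view:
print(np.shares_memory(a, a.conj().T))  # False
```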
But I guess this part should be rehashed clearer until next time :)
![](https://secure.gravatar.com/avatar/851ff10fbb1363b7d6111ac60194cc1c.jpg?s=120&d=mm&r=g)
Hi Kirill, others,

Indeed, it is becoming long! That said, while initially I was quite charmed by Eric's suggestion of deprecating and then changing `.T`, I think the well-argued opposition to it has changed my opinion. Perhaps most persuasive to me was Matthew's point just now that code (or a code snippet) that worked on an old numpy should not silently do something different on a new numpy (unless the old behaviour was a true bug, of course; but here `.T` has always had a very well-defined meaning - even though you are right that the documentation does not exactly lead the novice user away from using it for matrix transpose! If someone has the time to open a PR that clarifies it.........).

Note that I do agree with the sentiment that the deprecation/change would likely expose some hidden bugs - and, as noted, it is hard to know where those bugs are if they are hidden! (FWIW, I did find some in astropy's coordinate implementation, which was initially written for scalar coordinates where `.T` worked just fine; as a result, astropy gained a `matrix_transpose` utility function.) Still, it does not quite outweigh to me the disadvantages enumerated.

One thing seems clear: if `.T` is out, that means `.H` is out as well (at least as a matrix transpose, the only sensible meaning I think it has). Having `.H` as a conjugate matrix transpose would just cause more confusion about the meaning of `.T`.

For the names, my suggestion of lower-casing the M in the initial one, i.e., `.mT` and `.mH`, so far seemed most supported (and I think we should discuss *assuming* those would eventually involve not copying data; let's not worry about implementation details).

So, specific items to confirm:

1) Is this a worthy addition? (certainly, their existence would reduce confusion about `.T`... so far, my sense is tentative yes)

2) Are `.mT` and `.mH` indeed the consensus? [1]

3) What, if anything, should these new properties do for 0-d and 1-d arrays: pass through, change shape, or error? (logically, I think *new* properties should never emit warnings: either do something or error)
- In favour of pass-through: 1-d is a vector; `dot` and `matmul` work fine with this.
- In favour of shape change: "m" stands for matrix; can be generous on input, but should be strict on output. After all, other code may not make the same assumption that 1-d arrays are fine as row and column vectors.
- In favour of error: "m" stands for matrix and the input is not a matrix! Let the user add np.newaxis in the right place, which will make the intent clear.

All the best,
Marten

[1] Some sadness about mᵀ and mᴴ - but, then, there is http://www.modernemacs.com/post/prettify-mode/

On Tue, Jun 25, 2019 at 4:17 PM Kirill Balunov <kirillbalunov@gmail.com> wrote:
Tue, Jun 25, 2019 at 21:20, Cameron Blocker <cameronjblocker@gmail.com>:
It seems to me that the general consensus is that we shouldn't be changing .T to do what we've termed matrix transpose or conjugate transpose.
Reading through this thread, I cannot say that I have the same opinion - at first, many looked positively at the possibility of changing `arr.T` to mean a transpose of the last two dimensions by default. I want to point out that it is rather difficult to follow what is currently being discussed in this thread, probably because several different (albeit related) topics are being discussed at once. I would suggest first discussing the `arr.T` change, because other topics somewhat depend on it (`arr.MT`/`arr.CT`/`arr.H` and others).
P.S.: The documentation for `.T` shows only two examples: one for the 1d case (to show that it works) and one for the 2d case. Maybe that means something? (Especially for new `numpy` users.)
with kind regards, -gdg
![](https://secure.gravatar.com/avatar/b4f6d4f8b501cb05fd054944a166a121.jpg?s=120&d=mm&r=g)
On Tue, 2019-06-25 at 17:00 -0400, Marten van Kerkwijk wrote:
Hi Kirill, others,
Indeed, it is becoming long! That said, while initially I was quite charmed by Eric's suggestion of deprecating and then changing `.T`, I think the well-argued opposition to it has changed my opinion. Perhaps most persuasive to me was Matthew's point just now that code (or a code snippet) that worked on an old numpy should not silently do something different on a new numpy (unless the old behaviour was a true bug, of course; but here `.T` has always had a very well-defined meaning - even though you are right that the documentation does not exactly lead the novice user away from using it for matrix transpose! If someone has the time to open a PR that clarifies it.........).
Note that I do agree with the sentiment that the deprecation/change would likely expose some hidden bugs - and, as noted, it is hard to know where those bugs are if they are hidden! (FWIW, I did find some in astropy's coordinate implementation, which was initially written for scalar coordinates where `.T` worked just fine; as a result, astropy gained a `matrix_transpose` utility function.) Still, it does not quite outweigh to me the disadvantages enumerated.
True, eventually switching is much more problematic than mere deprecation, and yes, I guess the last step is likely forbidding. I do not care too much, but at least the deprecation/warning does not seem too bad to me, unless it is really widely used for high dimensions. Sure, it requires touching code and may make it uglier, but a change requiring touching a fair amount of scripts is not all that uncommon, especially if it can find some bugs (e.g. for me, scipy.misc.factorial moving meant I had to change a lot of scripts; annoying, but I could live with it). Although, I might prefer to spend our "force users to do annoying code changes" chips on better things. And I guess there may not be much of a point in a mere deprecation.
One thing seems clear: if `.T` is out, that means `.H` is out as well (at least as a matrix transpose, the only sensible meaning I think it has). Having `.H` as a conjugate matrix transpose would just cause more confusion about the meaning of `.T`.
I tend to agree; the only way that could work, it seems, is if T were deprecated for high dimensions.
For the names, my suggestion of lower-casing the M in the initial one, i.e., `.mT` and `.mH`, so far seemed most supported (and I think we should discuss *assuming* those would eventually involve not copying data; let's not worry about implementation details).
It would be a nice assumption, but as I said, I do see an issue with object array support, which makes it likely that `.H` could only be supported on some dtypes (similar to `.real`/`.imag`). (Strictly speaking it would be possible to make a ConjugateObject dtype and define casting for it; I have some doubt that the added complexity is worth it, though.) The no-copy conjugate is a cool idea but ultimately may be a bit too cool?
So, specific items to confirm:
1) Is this a worthy addition? (certainly, their existence would reduce confusion about `.T`... so far, my sense is tentative yes)
2) Are `.mT` and `.mH` indeed the consensus? [1]
It is likely the only reasonable option, unless you make an `H` object which does `arr_like**H`, but I doubt that is a good idea.
3) What, if anything, should these new properties do for 0-d and 1-d arrays: pass through, change shape, or error? (logically, I think *new* properties should never emit warnings: either do something or error). <snip> Marten
[1] Some sadness about mᵀ and mᴴ - but, then, there is http://www.modernemacs.com/post/prettify-mode/
Hehe, you are using a block for Phonetic Extensions, and that block has a second H which looks the same on my font but is Cyrillic. Lucky us, we could make one of them for row vectors and the other for column vectors ;). - Sebastian
On Tue, Jun 25, 2019 at 4:17 PM Kirill Balunov < kirillbalunov@gmail.com> wrote:
вт, 25 июн. 2019 г. в 21:20, Cameron Blocker < cameronjblocker@gmail.com>:
It seems to me that the general consensus is that we shouldn't be changing .T to do what we've termed matrix transpose or conjugate transpose.
Reading through this thread, I can not say that I have the same opinion - at first, many looked positively at the possibility of changing `arr.T` to mean a transpose of the last two dimensions by default, and then people started discussing several different (albeit related) topics at once, which makes it rather difficult to follow what is currently being discussed. I would suggest first discussing the `arr.T` change, because the other topics (`arr.MT`/`arr.CT`/`arr.H` and others) somewhat depend on it.
p.s: The documentation for `.T` shows only two examples: one for the 1d case - to show that it works - and one for the 2d case. Maybe that means something? (Especially for new `numpy` users.)
with kind regards, -gdg _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@python.org https://mail.python.org/mailman/listinfo/numpy-discussion
![](https://secure.gravatar.com/avatar/209654202cde8ec709dee0a4d23c717d.jpg?s=120&d=mm&r=g)
One other approach here that perhaps treads a little too close to np.matrix:

```python
class MatrixOpWrapper:
    def __init__(self, arr):  # todo: accept axis arguments here?
        self._array = arr  # todo: assert that arr.ndim >= 2 / call atleast1d

    @property
    def T(self):
        return linalg.transpose(self._array)

    @property
    def H(self):
        return M(self._array.conj()).T  # add .I too?

M = MatrixOpWrapper
```

So M(arr).T instead of arr.mT, which has the benefit of not expanding the number of ndarray members (and those needed by duck-types) further. On Tue, 25 Jun 2019 at 14:50, Sebastian Berg <sebastian@sipsolutions.net> wrote:
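For reference, here is a runnable variant of Eric's wrapper, substituting `np.swapaxes` for the hypothetical `linalg.transpose` (which does not exist in NumPy); the `MatrixOpWrapper`/`M` names are from Eric's sketch, and everything else is an illustrative assumption, not a NumPy API:

```python
import numpy as np

class MatrixOpWrapper:
    """Sketch of the proposed wrapper: matrix ops on the last two axes."""
    def __init__(self, arr):
        self._array = np.asarray(arr)
        if self._array.ndim < 2:
            raise ValueError("expected an array with ndim >= 2")

    @property
    def T(self):
        # matrix transpose: swap only the last two axes (a zero-copy view)
        return np.swapaxes(self._array, -1, -2)

    @property
    def H(self):
        # conjugate (Hermitian) transpose; note that .conj() copies
        return M(self._array.conj()).T

M = MatrixOpWrapper

# usage: M(arr).T instead of a hypothetical arr.mT attribute
stack = np.arange(24.0).reshape(4, 2, 3)
assert M(stack).T.shape == (4, 3, 2)
assert np.shares_memory(stack, M(stack).T)  # .T stays a view
```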
On Tue, 2019-06-25 at 17:00 -0400, Marten van Kerkwijk wrote:
Hi Kirill, others,
Indeed, it is becoming long! That said, while initially I was quite charmed by Eric's suggestion of deprecating and then changing `.T`, I think the well-argued opposition to it has changed my opinion. Perhaps most persuasive to me was Matthew's point just now that code (or a code snippet) that worked on an old numpy should not silently do something different on a new numpy (unless the old behaviour was a true bug, of course; but here `.T` has always had a very well-defined meaning - even though you are right that the documentation does not exactly lead the novice user away from using it for matrix transpose! It would be great if someone has the time to open a PR that clarifies it...).
Note that I do agree with the sentiment that the deprecation/change would likely expose some hidden bugs - and, as noted, it is hard to know where those bugs are if they are hidden! (FWIW, I did find some in astropy's coordinate implementation, which was initially written for scalar coordinates where `.T` worked just fine; as a result, astropy gained a `matrix_transpose` utility function.) Still, it does not quite outweigh to me the disadvantages enumerated.
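For concreteness, the deprecate-then-change suggestion being weighed here could be prototyped as a subclass that warns on >2-d use of `.T`; `DeprecatedTArray` is a made-up name and nothing like this exists in NumPy, this is purely an illustration of what the FutureWarning step would look like:

```python
import warnings
import numpy as np

class DeprecatedTArray(np.ndarray):
    # hypothetical: emit FutureWarning when .T reverses more than two axes
    @property
    def T(self):
        if self.ndim > 2:
            warnings.warn(
                "in future .T will transpose just the last two dimensions, "
                "not all dimensions; use arr.transpose() if reversing all "
                "dimensions is deliberate",
                FutureWarning, stacklevel=2)
        # delegate to the plain ndarray behaviour (reverse all axes)
        return np.asarray(self).T
```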
![](https://secure.gravatar.com/avatar/5f88830d19f9c83e2ddfd913496c5025.jpg?s=120&d=mm&r=g)
On Tue, Jun 25, 2019 at 11:02 PM Marten van Kerkwijk < m.h.vankerkwijk@gmail.com> wrote:
For the names, my suggestion of lower-casing the M in the initial one, i.e., `.mT` and `.mH`, so far seemed most supported (and I think we should discuss *assuming* those would eventually involve not copying data; let's not worry about implementation details).
For the record, this is not an implementation detail. It was the consensus before that `H` is a bad idea unless it returns a view just like `T`: https://github.com/numpy/numpy/issues/8882
So, specific items to confirm:
1) Is this a worthy addition? (certainly, their existence would reduce confusion about `.T`... so far, my sense is tentative yes)
2) Are `.mT` and `.mH` indeed the consensus? [1]
I think `H` would be good to revisit *if* it can be made to return a view. I think a tweak on `T` for >2-D input does not meet the bar for inclusion. Cheers, Ralf
3) What, if anything, should these new properties do for 0-d and 1-d arrays: pass through, change shape, or error? (logically, I think *new* properties should never emit warnings: either do something or error).
- In favour of pass-through: 1-d is a vector; `dot` and `matmul` work fine with this.
- In favour of shape change: "m" stands for matrix; can be generous on input, but should be strict on output. After all, other code may not make the same assumption that 1-d arrays are fine as row and column vectors.
- In favour of error: "m" stands for matrix and the input is not a matrix! Let the user add np.newaxis in the right place, which will make the intent clear.
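The three candidate behaviours can be sketched as plain functions (the helper names are hypothetical, and `np.swapaxes` stands in for whatever the real implementation would do):

```python
import numpy as np

def mT_passthrough(a):
    # option "pass through": 0-d and 1-d come back unchanged, like .T today
    a = np.asarray(a)
    return a if a.ndim < 2 else np.swapaxes(a, -1, -2)

def mT_promote(a):
    # option "shape change": treat a 1-d array as a row vector,
    # so its matrix transpose is a column vector
    a = np.atleast_2d(np.asarray(a))
    return np.swapaxes(a, -1, -2)

def mT_strict(a):
    # option "error": refuse anything that is not at least a matrix
    a = np.asarray(a)
    if a.ndim < 2:
        raise ValueError("matrix transpose requires ndim >= 2")
    return np.swapaxes(a, -1, -2)

v = np.array([1.0, 2.0, 3.0])
assert mT_passthrough(v).shape == (3,)
assert mT_promote(v).shape == (3, 1)
```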
![](https://secure.gravatar.com/avatar/851ff10fbb1363b7d6111ac60194cc1c.jpg?s=120&d=mm&r=g)
Hi Ralf, On Tue, Jun 25, 2019 at 6:31 PM Ralf Gommers <ralf.gommers@gmail.com> wrote:
On Tue, Jun 25, 2019 at 11:02 PM Marten van Kerkwijk < m.h.vankerkwijk@gmail.com> wrote:
For the names, my suggestion of lower-casing the M in the initial one, i.e., `.mT` and `.mH`, so far seemed most supported (and I think we should discuss *assuming* those would eventually involve not copying data; let's not worry about implementation details).
For the record, this is not an implementation detail. It was the consensus before that `H` is a bad idea unless it returns a view just like `T`: https://github.com/numpy/numpy/issues/8882
Is there more than an issue in which Nathaniel rejects it, mentioning some previous consensus? I was part of the discussion of the complex conjugate dtype, but do not recall any consensus beyond a "wish to keep properties simple". Certainly the "property does not do any calculation" rule seems arbitrary; the only strict rule I would apply myself is that the computation should not be able to fail (computationally; out-of-memory does not count, that's like large integer overflow). So, I'd definitely agree with you if we were discussing a property `.I` for matrix inverse (and indeed have said so in related issues). But for .H, not so much. Certainly whoever wrote np.Matrix didn't seem to feel bound by it. Note that for *matrix* transpose (as opposed to general axis reordering with .transpose()), I see far less use for what is returned being a writable view. Indeed, for conjugate transpose, I would rather never allow writing back even if we had the conjugate dtype, since one would surely get it wrong (likely, `.conj()` would also return a read-only view, at least by default; perhaps one should even go as far as only allowing `a.view(conjugate-dtype)` as the way to get a writable view).
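The view-versus-copy distinction under discussion is easy to demonstrate with current NumPy:

```python
import numpy as np

a = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])

# .T is a zero-copy view: it shares memory with the original array
assert np.shares_memory(a, a.T)

# .conj() materializes a new array, so .conj().T cannot be a view;
# this is why an .H-as-a-view would need something like a conjugate dtype
assert not np.shares_memory(a, a.conj().T)
```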
So, specific items to confirm:
1) Is this a worthy addition? (certainly, their existence would reduce confusion about `.T`... so far, my sense is tentative yes)
2) Are `.mT` and `.mH` indeed the consensus? [1]
I think `H` would be good to revisit *if* it can be made to return a view. I think a tweak on `T` for >2-D input does not meet the bar for inclusion.
Well, I guess it is obvious I disagree: I think this more than meets the bar for inclusion. To me, this certainly is a much bigger deal than something like oindex or vindex (which I do like). Indeed, it would seem to me that if a visually more convenient way to do (stacks of) matrix multiplication for numpy was good enough to warrant changing the python syntax, then surely having a visually more convenient standard way to do matrix transpose should not be considered off-limits for ndarray; how often do you see a series of matrix manipulations that does not involve both multiplication and transpose? It certainly doesn't seem to me much of an argument that, because someone previously decided to use .T as a shortcut for the computer-scientist idea of transpose, we should not allow the mathematical/physical-scientist one - one I would argue is guaranteed to be used much more. The latter of course just repeats what many others have written above, but given that you call it a "tweak", perhaps it is worth backing up. For astropy, a quick grep gives:
- 28 uses of the matrix_transpose function I wrote because numpy doesn't have even a simple function for that; the people who wrote the original code used the Matrix class, which had the proper .T (but doesn't extend to multiple dimensions; we might still be using it otherwise).
- 11 uses of .T, all of which seem to be on 2-D arrays and are certainly used as if they were matrix transpose (most are for fitting). Certainly, all of these are bugs lying in wait if the arrays ever get to be >2-D.
All the best, Marten
![](https://secure.gravatar.com/avatar/5f88830d19f9c83e2ddfd913496c5025.jpg?s=120&d=mm&r=g)
On Wed, Jun 26, 2019 at 3:56 AM Marten van Kerkwijk < m.h.vankerkwijk@gmail.com> wrote:
Hi Ralf,
On Tue, Jun 25, 2019 at 6:31 PM Ralf Gommers <ralf.gommers@gmail.com> wrote:
On Tue, Jun 25, 2019 at 11:02 PM Marten van Kerkwijk < m.h.vankerkwijk@gmail.com> wrote:
For the names, my suggestion of lower-casing the M in the initial one, i.e., `.mT` and `.mH`, so far seemed most supported (and I think we should discuss *assuming* those would eventually involve not copying data; let's not worry about implementation details).
For the record, this is not an implementation detail. It was the consensus before that `H` is a bad idea unless it returns a view just like `T`: https://github.com/numpy/numpy/issues/8882
Is there more than an issue in which Nathaniel rejects it, mentioning some previous consensus?
Yes, this has been discussed in lots of detail before, also on this list (as Nathaniel mentioned in the issue). I spent 10 minutes trying to find it but that wasn't enough. I do think it's not necessarily my responsibility though to dig up all the history here - that should be on the proposers of a new feature ....
I was part of the discussion of the complex conjugate dtype, but do not recall any consensus beyond a "wish to keep properties simple". Certainly the "property does not do any calculation" rule seems arbitrary; the only strict rule I would apply myself is that the computation should not be able to fail (computationally; out-of-memory does not count, that's like large integer overflow). So, I'd definitely agree with you if we were discussing a property `.I` for matrix inverse (and indeed have said so in related issues). But for .H, not so much. Certainly whoever wrote np.Matrix didn't seem to feel bound by it.
Note that for *matrix* transpose (as opposed to general axis reordering with .transpose()), I see far less use for what is returned being a writable view. Indeed, for conjugate transpose, I would rather never allow writing back even if we had the conjugate dtype, since one would surely get it wrong (likely, `.conj()` would also return a read-only view, at least by default; perhaps one should even go as far as only allowing `a.view(conjugate-dtype)` as the way to get a writable view).
So, specific items to confirm:
1) Is this a worthy addition? (certainly, their existence would reduce confusion about `.T`... so far, my sense is tentative yes)
2) Are `.mT` and `.mH` indeed the consensus? [1]
I think `H` would be good to revisit *if* it can be made to return a view. I think a tweak on `T` for >2-D input does not meet the bar for inclusion.
Well, I guess it is obvious I disagree: I think this more than meets the bar for inclusion. To me, this certainly is a much bigger deal than something like oindex or vindex (which I do like).
Honestly, I don't really want to be arguing against this (or even be forced to spend time following along here). My main problem with this proposal right now is that we've had this discussion multiple times, and it was rejected with solid arguments after taking up a lot of time. Restarting that discussion from scratch without considering the history feels wrong. It's like a democracy voting on becoming a dictatorship repeatedly: you can have a "no" vote several times, but if you rerun the vote often enough at some point you'll get a "yes", and then it's a done deal. I think this requires a serious write-up, as either a NEP or a GitHub issue with a good set of cross-links and addressing all previous arguments.
Indeed, it would seem to me that if a visually more convenient way to do (stacks of) matrix multiplication for numpy was good enough to warrant changing the python syntax, then surely having a visually more convenient standard way to do matrix transpose should not be considered off-limits for ndarray; how often do you see a series of matrix manipulations that does not involve both multiplication and transpose?
It certainly doesn't seem to me much of an argument that, because someone previously decided to use .T as a shortcut for the computer-scientist idea of transpose, we should not allow the mathematical/physical-scientist one - one I would argue is guaranteed to be used much more.
The latter of course just repeats what many others have written above, but given that you call it a "tweak", perhaps it is worth backing up. For astropy, a quick grep gives:
- 28 uses of the matrix_transpose function I wrote because numpy doesn't have even a simple function for that and the people who wrote the original code used the Matrix class which had the proper .T (but doesn't extend to multiple dimensions; we might still be using it otherwise).
A utility function in scipy.linalg would be a more low-API-impact approach to addressing this.
- 11 uses of .T, all of which seem to be on 2-D arrays and are certainly used as if they were matrix transpose (most are for fitting). Certainly, all of these are bugs lying in wait if the arrays ever get to be >2-D.
Most linalg is 2-D, that's why numpy.matrix and scipy.sparse matrices are 2-D only. If it's a real worry for those 11 cases, you could just add some comments or tests that prevent introducing bugs. More importantly, your assumption that >2-D arrays are "stacks of matrices" and that other usage is for "computer scientists" is arguably incorrect. There are many use cases for 3-D and higher-dimensional arrays that are not just "vectorized matrix math". As a physicist, I've done lots of work with 3-D and 4-D grids for everything from quantum physics to engineering problems in semiconductor equipment. NumPy is great for that, and I've never needed >=3-D linalg for any of it (and transposing is useful). So please don't claim the physical-scientist view for this :) Cheers, Ralf
![](https://secure.gravatar.com/avatar/81e62cb212edf2a8402c842b120d9f31.jpg?s=120&d=mm&r=g)
Maybe a bit of grouping would help, because I am also losing track here. Let's see if I can manage to get something sensible, because, just like Marten mentioned, I am confusing myself even when I am thinking about this.

1- Transpose operation on 1D arrays: This is a well-known confusion point for anyone that arrives at NumPy from, say, a matlab background or any linear-algebra-based usage. Andras mentioned already that this is a subset of NumPy users, so we have to be careful about the user assumptions. 1D arrays are computational constructs; mathematically they don't exist, and this is the basis that matlab enforced since day 1: any numerical object is at least a 2D array, including scalars, hence transposition flips the dimensions even for a column or row vector. That doesn't mean we cannot change it or need to follow matlab, but this is kind of what anybody kinda sorta woulda expect. For some historical reason, on the numpy side transposition on 1D arrays does nothing, since they have a single dimension; hence you have to create a 2D vector from the get-go for the transpose to match the linear algebra intuition. The points discussed so far are about whether we should go further and intercept this behavior such that a 1D transpose gives errors or warnings, as opposed to the current silent no-op. As far as I can tell, we have a consensus that this behavior is here to stay for the foreseeable future.

2- Using transpose to reshape the (complex) array or flip its dimensions: This is a usage mentioned above that I don't know much about. I usually go the "reshape() et al." way for this, but apparently folks use it to flip dimensions, and they don't want the automatic conjugation, which is exactly the opposite of what a linear-algebra-oriented user is used to having as an adjoint operator. The points discussed here are whether to inject conjugation into the .T behavior of complex arrays or not.
If not, can we have an extra .H or something that specifically does .conj().T together (or .T.conj(); order doesn't matter)? The main feeling (that I got so far) is that we shouldn't touch the current way and hopefully bring in another attribute.

3- Having a shorthand notation such as .H or .mH etc.: If the previous assertion is true, then the issue becomes what the new attribute should be named and how it can have the nice properties of a transpose, such as returning a view. However, this has been proposed and rejected before, e.g. GH-8882 and GH-13797. There is a catch here though: if the alternative is .conj().T, then it doesn't matter whether the new attribute copies or not, because .conj().T doesn't return a view either, and therefore the user receives a new array anyway; no benefits lost. Since the idea is to have a shorthand notation, it seems to me that this point is artificial in that sense and not necessarily a valid argument for rejection. But from Ralf's reluctance I feel like there is a historical wear-out on this subject.

4- Transpose of 3+D arrays: I think we missed the bus on this one for changing the default behavior, and there are glimpses of confirmation of this in the previous mails. I would suggest discussing this separately.

So if you are not already worn out and not feeling sour about it, I would like to propose that the discussion of item 3 be opened once again. Because the need is real, and we don't need to get choked on the implementation details right away.

Disclaimer: I do applied math, so I have a natural bias towards the linalg-y way of doing things. And sorry if I did that above; sometimes typing quickly loses the intention.

Best, ilhan

On Wed, Jun 26, 2019 at 4:39 AM Ralf Gommers <ralf.gommers@gmail.com> wrote:
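Points 1, 3, and 4 above describe current NumPy behaviour, which is easy to verify directly (plain NumPy, no proposed API):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
assert v.T.shape == (3,)          # point 1: 1D transpose is a silent no-op

a = np.zeros((2, 3, 4))
assert a.T.shape == (4, 3, 2)     # point 4: .T reverses *all* axes

# point 3: the .conj().T spelling of the Hermitian transpose already
# returns a new array, so a shorthand .H would lose no view-ness
b = np.ones((2, 2)) * 1j
assert not np.shares_memory(b, b.conj().T)
```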
![](https://secure.gravatar.com/avatar/03f2d50ce2e8d713af6058d2aeafab74.jpg?s=120&d=mm&r=g)
Dear Ilhan, Thanks for writing these up. I feel that from a usability standpoint most people would support #3 (.H/.mH), especially considering Marten's very good argument about @. Having to wrap your transposed matrices in function calls half defeats the purpose of being able to write stacked matrix operations elegantly within the ndarray class. The question is of course whether it's feasible from a project management/API design stand point (just to state the obvious). Regarding #1 (1d transpose): I just want to make it clear as someone who switched from MATLAB to python (and couldn't be happier) that we should treat MATLAB's behaviour as more of a cautionary tale rather than design ethos. I paused for exactly 5 seconds the first time I ran into the no-op of 1d transposes, and then I thought "yes, this makes sense", and that was it. To put it differently, I think it's more about MATLAB injecting false assumptions into users than about numpy behaving surprisingly. (On a side note, MATLAB's quirks are one of the reasons that the Spyder IDE, designed to be a MATLAB replacement, has very weird quirks that regularly trip up python users.) Regards, András On Wed, Jun 26, 2019 at 9:04 AM Ilhan Polat <ilhanpolat@gmail.com> wrote:
Maybe a bit of a grouping would help, because I am also losing track here. Let's see if I could manage to get something sensible because, just like Marten mentioned, I am confusing myself even when I am thinking about this
1- Transpose operation on 1D arrays: This is a well-known confusion point for anyone that arrives at NumPy usage from, say matlab background or any linear algebra based user. Andras mentioned already that this is a subset of NumPy users so we have to be careful about the user assumptions. 1D arrays are computational constructs and mathematically they don't exist and this is the basis that matlab enforced since day 1. Any numerical object is an at least 2D array including scalars hence transposition flips the dimensions even for a col vector or row vector. That doesn't mean we cannot change it or we need to follow matlab but this is kind of what anybody kinda sorta wouda expect. For some historical reason, on numpy side transposition on 1D arrays did nothing since they have single dimensions. Hence you have to create a 2D vector for transpose from the get go to match the linear algebra intuition. Points that has been discussed so far are about whether we should go further and even intercept this behavior such that 1D transpose gives errors or warnings as opposed to the current behavior of silent no-op. as far as I can tell, we have a consensus that this behavior is here to stay for the foreseeable future.
2- Using transpose to reshape the (complex) array or flip its dimensions This is a usage that has been mentioned above that I don't know much about. I usually go the "reshape() et al." way for this but apparently folks use it to flip dimensions and they don't want the automatically conjugation which is exactly the opposite of a linear algebra oriented user is used to have as an adjoint operator. Therefore points that have been discussed about are whether to inject conjugation into .T behavior of complex arrays or not. If not can we have an extra .H or something that specifically does .conj().T together (or .T.conj() order doesn't matter). The main feel (that I got so far) is that we shouldn't touch the current way and hopefully bring in another attribute.
3- Having a shorthand notation such as .H or .mH etc.: If the previous assertion is true, then the issue becomes what the new attribute should be named and how it can have the nice properties of a transpose, such as returning a view. However, this has been proposed and rejected before, e.g., GH-8882 and GH-13797. There is a catch here though: if the alternative is .conj().T, then it doesn't matter whether the new attribute copies or not, because .conj().T doesn't return a view either, so the user receives a new array anyway. Therefore no benefit is lost. Since the idea is to have a shorthand notation, it seems to me that this point is artificial in that sense and not necessarily a valid argument for rejection. But from the reluctance of Ralf I feel like there is a historical wear-out on this subject.
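The view-vs-copy point above can be demonstrated directly (a small sketch; `np.shares_memory` just checks whether two arrays overlap in memory):

```python
import numpy as np

A = np.array([[1 + 1j, 2 - 1j],
              [0 + 3j, 4 + 0j]])

# .T is a view of the same memory...
assert np.shares_memory(A, A.T)

# ...but the spelled-out adjoint already copies today, so a copying
# .H shorthand would lose nothing relative to the status quo:
H = A.conj().T
assert not np.shares_memory(A, H)
```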
4- Transpose of 3+D arrays: I think we missed the bus on this one for changing the default behavior now, and there are glimpses of confirmation of this in the previous mails. I would suggest discussing this separately.
So if you are not already worn out and not feeling sour about it, I would like to propose that the discussion of item 3 be opened once again, because the need is real and we don't need to get choked on implementation details right away.
Disclaimer: I do applied math, so I have a natural bias towards the linalg-y way of doing things. Sorry if that showed above; sometimes typing quickly loses the intention.
Best, ilhan
On Wed, Jun 26, 2019 at 4:39 AM Ralf Gommers <ralf.gommers@gmail.com> wrote:
On Wed, Jun 26, 2019 at 3:56 AM Marten van Kerkwijk <m.h.vankerkwijk@gmail.com> wrote:
Hi Ralf,
On Tue, Jun 25, 2019 at 6:31 PM Ralf Gommers <ralf.gommers@gmail.com> wrote:
On Tue, Jun 25, 2019 at 11:02 PM Marten van Kerkwijk <m.h.vankerkwijk@gmail.com> wrote:
For the names, my suggestion of lower-casing the M in the initial one, i.e., `.mT` and `.mH`, so far seemed most supported (and I think we should discuss *assuming* those would eventually involve not copying data; let's not worry about implementation details).
For the record, this is not an implementation detail. It was the consensus before that `H` is a bad idea unless it returns a view just like `T`: https://github.com/numpy/numpy/issues/8882
Is there more than an issue in which Nathaniel rejected it, mentioning some previous consensus?
Yes, this has been discussed in lots of detail before, also on this list (as Nathaniel mentioned in the issue). I spent 10 minutes to try and find it but that wasn't enough. I do think it's not necessarily my responsibility though to dig up all the history here - that should be on the proposers of a new feature ....
I was part of the discussion of the complex conjugate dtype, but do not recall any consensus beyond a "wish to keep properties simple". Certainly the "property does not do any calculation" rule seems arbitrary; the only strict rule I would apply myself is that the computation should not be able to fail (computationally; out-of-memory does not count, that's like large integer overflow). So, I'd definitely agree with you if we were discussing a property `.I` for matrix inverse (and indeed have said so in related issues). But for .H, not so much. Certainly whoever wrote np.matrix didn't seem to feel bound by it.
Note that for *matrix* transpose (as opposed to general axis reordering with .transpose()), I see far less use for what is returned being a writable view. Indeed, for conjugate transpose, I would rather never allow writing back, even if we had the conjugate dtype, since one would surely get it wrong (likely, `.conj()` would also return a read-only view, at least by default; perhaps one should even go as far as only allowing `a.view(conjugate-dtype)` as the way to get a writable view).
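For concreteness, a minimal sketch of what is at stake: today's `.T` is a writable view, which is part of what would make a read-only (or copying) `.H` feel different:

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)

# .T is a writable view of the same memory:
assert np.shares_memory(A, A.T)
A.T[0, 1] = 99.0        # writes through to A
print(A[1, 0])          # 99.0
```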
So, specific items to confirm:
1) Is this a worthy addition? (certainly, their existence would reduce confusion about `.T`... so far, my sense is tentative yes)
2) Are `.mT` and `.mH` indeed the consensus? [1]
I think `H` would be good to revisit *if* it can be made to return a view. I think a tweak on `T` for >2-D input does not meet the bar for inclusion.
Well, I guess it is obvious I disagree: I think this more than meets the bar for inclusion. To me, this certainly is a much bigger deal than something like oindex or vindex (which I do like).
Honestly, I don't really want to be arguing against this (or even be forced to spend time following along here). My main problem with this proposal right now is that we've had this discussion multiple times, and it was rejected with solid arguments after taking up a lot of time. Restarting that discussion from scratch without considering the history feels wrong. It's like a democracy voting on becoming a dictatorship repeatedly: you can have a "no" vote several times, but if you rerun the vote often enough at some point you'll get a "yes", and then it's a done deal.
I think this requires a serious write-up, as either a NEP or a GitHub issue with a good set of cross-links and addressing all previous arguments.
Indeed, it would seem to me that if a visually more convenient way to do (stacks of) matrix multiplication for numpy is good enough to warrant changing the python syntax, then surely having a visually more convenient standard way to do matrix transpose should not be considered off-limits for ndarray; how often do you see a series of matrix manipulations that does not involve both multiplication and transpose?
It certainly doesn't seem to me much of an argument that someone previously decided to use .T as a shortcut for the computer-scientist idea of transpose, and that therefore we cannot allow the mathematical/physical-scientist one - one I would argue is guaranteed to be used much more.
The latter of course just repeats what many others have written above, but given that you call it a "tweak", perhaps it is worth backing it up. For astropy, a quick grep gives:
- 28 uses of the matrix_transpose function I wrote, because numpy doesn't have even a simple function for that, and the people who wrote the original code used the Matrix class, which had the proper .T (but doesn't extend to multiple dimensions; we might still be using it otherwise).
A utility function in scipy.linalg would be a more low-API-impact approach to addressing this.
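Such a utility is essentially a one-liner over `np.swapaxes`. A hypothetical sketch (the name and error handling here are illustrative, not astropy's or scipy's actual implementation):

```python
import numpy as np

def matrix_transpose(a):
    """Transpose the last two axes of ``a``, leaving any leading
    "stack" axes alone (illustrative stand-in for the helper
    mentioned above)."""
    a = np.asanyarray(a)
    if a.ndim < 2:
        raise ValueError("need at least a 2-D array")
    return np.swapaxes(a, -2, -1)

stack = np.arange(24).reshape(2, 3, 4)   # a stack of two 3x4 matrices
print(matrix_transpose(stack).shape)     # (2, 4, 3)
```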
- 11 uses of .T, all of which seem to be on 2-D arrays and are certainly used as if they were matrix transpose (most are for fitting). Certainly, all of these are bugs lying in waiting if the arrays ever get to be >2-D.
Most linalg is 2-D, that's why numpy.matrix and scipy.sparse matrices are 2-D only. If it's a real worry for those 11 cases, you could just add some comments or tests that prevent introducing bugs.
More importantly, your assumption that >2-D arrays are "stacks of matrices" and that other usage is for "computer scientists" is arguably incorrect. There are many use cases for 3-D and higher-dimensional arrays that are not just "vectorized matrix math". As a physicist, I've done lots of work with 3-D and 4-D grids for everything from quantum physics to engineering problems in semiconductor equipment. NumPy is great for that, and I've never needed >=3-D linalg for any of it (and transposing is useful). So please don't claim the physical-scientist view for this :)
Cheers, Ralf
_______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@python.org https://mail.python.org/mailman/listinfo/numpy-discussion
![](https://secure.gravatar.com/avatar/b5fbd2bac8ddc5fd368b497e43e9d905.jpg?s=120&d=mm&r=g)
A previous discussion of adding a .H operator on the mailing list can be found here: http://numpy-discussion.10968.n7.nabble.com/add-H-attribute-td34474.html That thread refers to an earlier discussion at http://thread.gmane.org/gmane.comp.python.numeric.general/6637 but that link was broken for me at least; Ralf summarized it as "No strong arguments against and then several more votes in favor."

In summary, people seemed to like the idea of .H if it could return a view (or iterator) like .T, and didn't want it to return a copy temporarily until that could happen. A couple of people thought that .H was out of scope for an array library. This discussion also seems to be from before the deprecation of np.matrix had started, so the demand was maybe less evident then?

Is what is stopping .H from happening just that no one has stepped up to implement a conjugate view? If so, I am happy to contribute my time to this. I commonly work with large complex arrays and would appreciate saving the copy.

On Wed, Jun 26, 2019 at 4:50 AM Andras Deak <deak.andris@gmail.com> wrote:
Dear Ilhan,
Thanks for writing these up. I feel that from a usability standpoint most people would support #3 (.H/.mH), especially considering Marten's very good argument about @. Having to wrap your transposed matrices in function calls half defeats the purpose of being able to write stacked matrix operations elegantly within the ndarray class. The question is of course whether it's feasible from a project-management/API-design standpoint (just to state the obvious). Regarding #1 (1d transpose): I just want to make it clear, as someone who switched from MATLAB to python (and couldn't be happier), that we should treat MATLAB's behaviour as more of a cautionary tale than a design ethos. I paused for exactly 5 seconds the first time I ran into the no-op of 1d transposes, and then I thought "yes, this makes sense", and that was it. To put it differently, I think it's more about MATLAB injecting false assumptions into users than about numpy behaving surprisingly. (On a side note, MATLAB's quirks are one of the reasons that the Spyder IDE, designed to be a MATLAB replacement, has very weird quirks that regularly trip up python users.) Regards,
András
![](https://secure.gravatar.com/avatar/5f88830d19f9c83e2ddfd913496c5025.jpg?s=120&d=mm&r=g)
On Wed, Jun 26, 2019 at 6:32 PM Cameron Blocker <cameronjblocker@gmail.com> wrote:
A previous discussion of adding a .H operator on the mailing list can be found here: http://numpy-discussion.10968.n7.nabble.com/add-H-attribute-td34474.html that thread refers to an earlier discussion at http://thread.gmane.org/gmane.comp.python.numeric.general/6637 but that link was broken for me at least, but Ralf summarized it as "No strong arguments against and then several more votes in favor."
Thanks for digging up that history! Summary is that it was indeed about copy/view. Travis, Dag Sverre and Nathaniel all argued against .H with copy behavior.
In summary, people seemed to like the idea of .H if it could return a view( or iterator) like .T, and didn't want it to return a copy temporarily until that could happen. A couple of people thought that .H was out of scope for an array library.
This discussion also seems to be before the deprecation of np.Matrix had started, so the demand was maybe less evident then?
Probably not, that thread is from 5 years after it was clear that np.matrix should not be used anymore.
Is what is stopping .H from happening just that no one has stepped up to implement a conjugate view?
I think so, yes.

If so, I am happy to contribute my time to this. I commonly work with large complex arrays and would appreciate saving the copy.
Thanks, that would be really welcome. If that doesn't work out, the alternative proposed by Eric yesterday to write a better new matrix object ( https://github.com/numpy/numpy/issues/13835) is probably the way to go. Or it may be preferred anyway / as well, because you can add more niceties like row/column vectors and enforcing >= 2-D while still not causing problems like np.matrix did by changing semantics of operators or indexing.
On Wed, Jun 26, 2019 at 4:50 AM Andras Deak <deak.andris@gmail.com> wrote:
Dear Ilhan,
Thanks for writing these up. I feel that from a usability standpoint most people would support #3 (.H/.mH), especially considering Marten's very good argument about @.
The main motivation for the @ PEP was actually to be able to get rid of objects like np.matrix and scipy.sparse matrices that redefine the meaning of the * operator. Quote: "This PEP proposes the minimum effective change to Python syntax that will allow us to drain this swamp [meaning np.matrix & co]." Notably, the @ PEP was written by Nathaniel, who was opposed to a copying .H. Cheers, Ralf Having to wrap your transposed matrices in function calls half defeats
the purpose of being able to write stacked matrix operations elegantly within the ndarray class. The question is of course whether it's feasible from a project management/API design stand point (just to state the obvious). Regarding #1 (1d transpose): I just want to make it clear as someone who switched from MATLAB to python (and couldn't be happier) that we should treat MATLAB's behaviour as more of a cautionary tale rather than design ethos. I paused for exactly 5 seconds the first time I ran into the no-op of 1d transposes, and then I thought "yes, this makes sense", and that was it. To put it differently, I think it's more about MATLAB injecting false assumptions into users than about numpy behaving surprisingly. (On a side note, MATLAB's quirks are one of the reasons that the Spyder IDE, designed to be a MATLAB replacement, has very weird quirks that regularly trip up python users.) Regards,
András
On Wed, Jun 26, 2019 at 9:04 AM Ilhan Polat <ilhanpolat@gmail.com> wrote:
Maybe a bit of a grouping would help, because I am also losing track
here. Let's see if I could manage to get something sensible because, just like Marten mentioned, I am confusing myself even when I am thinking about this
1- Transpose operation on 1D arrays: This is a well-known confusion point for anyone that arrives at
NumPy usage from, say matlab background or any linear algebra based user. Andras mentioned already that this is a subset of NumPy users so we have to be careful about the user assumptions. 1D arrays are computational constructs and mathematically they don't exist and this is the basis that matlab enforced since day 1. Any numerical object is an at least 2D array including scalars hence transposition flips the dimensions even for a col vector or row vector. That doesn't mean we cannot change it or we need to follow matlab but this is kind of what anybody kinda sorta wouda expect. For some historical reason, on numpy side transposition on 1D arrays did nothing since they have single dimensions. Hence you have to create a 2D vector for transpose from the get go to match the linear algebra intuition. Points that has been discussed so far are about whether we should go further and even intercept this behavior such that 1D transpose gives errors or warnings as opposed to the current behavior of silent no-op. as far as I can tell, we have a consensus that this behavior is here to stay for the foreseeable future.
2- Using transpose to reshape the (complex) array or flip its dimensions This is a usage that has been mentioned above that I don't know
much about. I usually go the "reshape() et al." way for this but apparently folks use it to flip dimensions and they don't want the automatically conjugation which is exactly the opposite of a linear algebra oriented user is used to have as an adjoint operator. Therefore points that have been discussed about are whether to inject conjugation into .T behavior of complex arrays or not. If not can we have an extra .H or something that specifically does .conj().T together (or .T.conj() order doesn't matter). The main feel (that I got so far) is that we shouldn't touch the current way and hopefully bring in another attribute.
3- Having a shorthand notation such as .H or .mH etc. If the previous assertion is true then the issue becomes what
should be the new name of the attribute and how can it have the nice properties of a transpose such as returning a view etc. However this has been proposed and rejected before e.g., GH-8882 and GH-13797. There is a catch here though, because if the alternative is .conj().T then it doesn't matter whether it copies or not because .conj().T doesn't return a view either and therefore the user receives a new array anyways. Therefore no benefits lost. Since the idea is to have a shorthand notation, it seems to me that this point is artificial in that sense and not necessarily a valid argument for rejection. But from the reluctance of Ralf I feel like there is a historical wear-out on this subject.
4- transpose of 3+D arrays I think we missed the bus on this one for changing the default
behavior now and there are glimpses of confirmation of this above in the previous mails. I would suggest discussing this separately.
So if you are not already worn out and not feeling sour about it, I
would like to propose the discussion of item 3 opened once again. Because the need is real and we don't need to get choked on the implementation details right away.
Disclaimer: I do applied math so I have a natural bias towards the
linalg-y way of doing things. And sorry about that if I did that above, sometimes typing quickly loses the intention.
Best, ilhan
On Wed, Jun 26, 2019 at 4:39 AM Ralf Gommers <ralf.gommers@gmail.com>
On Wed, Jun 26, 2019 at 3:56 AM Marten van Kerkwijk <
m.h.vankerkwijk@gmail.com> wrote:
Hi Ralf,
On Tue, Jun 25, 2019 at 6:31 PM Ralf Gommers <ralf.gommers@gmail.com>
wrote:
On Tue, Jun 25, 2019 at 11:02 PM Marten van Kerkwijk <
m.h.vankerkwijk@gmail.com> wrote:
> > > For the names, my suggestion of lower-casing the M in the initial one, i.e., `.mT` and `.mH`, so far seemed most supported (and I think we should discuss *assuming* those would eventually involve not copying data; let's not worry about implementation details).
For the record, this is not an implementation detail. It was the consensus before that `H` is a bad idea unless it returns a view just like `T`: https://github.com/numpy/numpy/issues/8882
Is there more than an issue in which Nathaniel rejecting it mentioning some previous consensus?
Yes, this has been discussed in lots of detail before, also on this
I was part of the discussion of the complex conjugate dtype, but do
not recall any consensus beyond a "wish to keep properties simple". Certainly the "property does not do any calculation" rule seems arbitrary;
Note that for *matrix* transpose (as opposed to general axis
reordering with .tranpose()), I see far less use for what is returned being a writable view. Indeed, for conjugate transpose, I would rather never allow writing back even if it we had the conjugate dtype since one would surely get it wrong (likely, `.conj()` would also return a read-only view, at least by default; perhaps one should even go as far as only allowing `a.view(conjugate-dtype)` as they way to get a writable view).
So, specific items to confirm:
1) Is this a worthy addition? (certainly, their existence would
reduce confusion about `.T`... so far, my sense is tentative yes)
2) Are `.mT` and `.mH` indeed the consensus? [1]
I think `H` would be good to revisit *if* it can be made to return a view. I think a tweak on `T` for >2-D input does not meet the bar for inclusion.
Well, I guess it is obvious I disagree: I think this more than meets
Honestly, I don't really want to be arguing against this (or even be
forced to spend time following along here). My main problem with this
I think this requires a serious write-up, as either a NEP or a GitHub
issue with a good set of cross-links and addressing all previous arguments.
Indeed, it would seem to me that if a visually more convenient way to
do (stacks of) matrix multiplication for numpy is good enough to warrant changing the python syntax, then surely having a visually more convenient standard way to do matrix transpose should not be considered off-limits for ndarray; how often do you see a series matrix manipulations that does not involve both multiplication and transpose?
It certainly doesn't seem to me much of an argument that someone
The latter of course just repeats what many others have written
above, but since given that you call it a "tweak", perhaps it is worth backing up. For astropy, a quick grep gives:
- 28 uses of the matrix_transpose function I wrote because numpy
doesn't have even a simple function for that and the people who wrote the original code used the Matrix class which had the proper .T (but doesn't extend to multiple dimensions; we might still be using it otherwise).
A utility function in scipy.linalg would be a more low-API-impact approach to addressing this.
- 11 uses of .T, all of which seem to be on 2-D arrays and are
certainly used as if they were matrix transpose (most are for fitting). Certainly, all of these are bugs lying in waiting if the arrays ever get to be >2-D.
Most linalg is 2-D, that's why numpy.matrix and scipy.sparse matrices are 2-D only. If it's a real worry for those 11 cases, you could just add some comments or tests that prevent introducing bugs.
More importantly, your assumption that >2-D arrays are "stacks of matrices" and that other usage is for "computer scientists" is arguably incorrect. There are many use cases for 3-D and higher-dimensional arrays
wrote: list (as Nathaniel mentioned in the issue). I spent 10 minutes to try and find it but that wasn't enough. I do think it's not necessarily my responsibility though to dig up all the history here - that should be on the proposers of a new feature .... the only strict rule I would apply myself is that the computation should not be able to fail (computationally, out-of-memory does not count; that's like large integer overflow). So, I'd definitely agree with you if we were discussion a property `.I` for matrix inverse (and indeed have said so in related issues). But for .H, not so much. Certainly whoever wrote np.Matrix didn't seem to feel bound by it. the bar for inclusion. To me, this certainly is a much bigger deal that something like oindex or vindex (which I do like). proposal right now is that we've had this discussion multiple times, and it was rejected with solid arguments after taking up a lot of time. Restarting that discussion from scratch without considering the history feels wrong. It's like a democracy voting on becoming a dictatorship repeatedly: you can have a "no" vote several times, but if you rerun the vote often enough at some point you'll get a "yes", and then it's a done deal. previously decided to use .T for a shortcut for the computer scientist idea of transpose to not allow the mathematical/physical-scientist one - one I would argue is guaranteed to be used much more. that are not just "vectorized matrix math". As a physicist, I've done lots of work with 3-D and 4-D grids for everything from quantum physics to engineering problems in semiconductor equipment. NumPy is great for that, and I've never needed >=3-D linalg for any of it (and transposing is useful). So please don't claim the physicial-scientist view for this:)
Cheers, Ralf
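The behavior the whole thread revolves around can be seen in a few lines: on a >2-D array, `.T` reverses *all* axes, while the "matrix transpose" people keep asking for would swap only the last two (spelled here with `np.swapaxes`, which has long existed):

```python
import numpy as np

a = np.zeros((2, 3, 4))

# .T reverses every axis -- the "computer scientist" transpose
print(a.T.shape)                     # (4, 3, 2)

# swapping only the last two axes -- the "stack of matrices" transpose
print(np.swapaxes(a, -1, -2).shape)  # (2, 4, 3)
```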
_______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@python.org https://mail.python.org/mailman/listinfo/numpy-discussion
![](https://secure.gravatar.com/avatar/851ff10fbb1363b7d6111ac60194cc1c.jpg?s=120&d=mm&r=g)
The main motivation for the @ PEP was actually to be able to get rid of objects like np.matrix and scipy.sparse matrices that redefine the meaning of the * operator. Quote: "This PEP proposes the minimum effective change to Python syntax that will allow us to drain this swamp [meaning np.matrix & co]."
Notably, the @ PEP was written by Nathaniel, who was opposed to a copying .H.
I should note that my comment invoking the history of @ was about the regular transpose, .mT. The executive summary of the PEP includes the following relevant sentence:

"""
Currently, most numerical Python code uses * for elementwise multiplication, and function/method syntax for matrix multiplication; however, this leads to ugly and unreadable code in common circumstances.
"""

Exactly the same holds for matrix transpose, and indeed for many matrix expressions the gain in readability is lost without a clean option to do the transpose.

-- Marten
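Marten's readability point can be made concrete with a stacked least-squares expression. The `mt()` helper below stands in for the proposed `.mT` property (a hypothetical name at the time of this thread); the two lines compute the same thing, differing only in how the transpose is spelled:

```python
import numpy as np

def mt(x):
    # stand-in for the proposed .mT: transpose of the last two axes only
    return np.swapaxes(x, -1, -2)

rng = np.random.default_rng(0)
a = rng.standard_normal((5, 4, 3))  # a stack of five 4x3 matrices
b = rng.standard_normal((5, 4, 1))

# normal equations, current spelling:
x1 = np.linalg.solve(np.swapaxes(a, -1, -2) @ a, np.swapaxes(a, -1, -2) @ b)

# the same with a transpose shorthand:
x2 = np.linalg.solve(mt(a) @ a, mt(a) @ b)
```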
![](https://secure.gravatar.com/avatar/5f88830d19f9c83e2ddfd913496c5025.jpg?s=120&d=mm&r=g)
On Wed, Jun 26, 2019 at 10:24 PM Marten van Kerkwijk < m.h.vankerkwijk@gmail.com> wrote:
The main motivation for the @ PEP was actually to be able to get rid of objects like np.matrix and scipy.sparse matrices that redefine the meaning of the * operator. Quote: "This PEP proposes the minimum effective change to Python syntax that will allow us to drain this swamp [meaning np.matrix & co]."
Notably, the @ PEP was written by Nathaniel, who was opposed to a copying .H.
I should note that my comment invoking the history of @ was about the regular transpose, .mT. The executive summary of the PEP includes the following relevant sentence: """ Currently, most numerical Python code uses * for elementwise multiplication, and function/method syntax for matrix multiplication; however, this leads to ugly and unreadable code in common circumstances. """
Exactly the same holds for matrix transpose, and indeed for many matrix expressions the gain in readability is lost without a clean option to do the transpose.
Yes, but that's not at all equivalent. The point for @ was that you cannot just create your own operator in Python, so there really was no alternative to a builtin operator. For methods, however, there's nothing stopping you from building a well-designed matrix class and adding all the new properties that you want.

Ralf
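Ralf's "build your own class" alternative can be sketched in a few lines. The property names `.mT` and `.H` follow the proposals in the thread; this is an illustration of the subclass route, not NumPy API:

```python
import numpy as np

class MatArray(np.ndarray):
    """ndarray subclass carrying the transpose properties debated above."""

    @property
    def mT(self):
        # matrix transpose: swap only the last two axes
        return np.swapaxes(self, -1, -2)

    @property
    def H(self):
        # Hermitian (conjugate) transpose of the last two axes
        return self.mT.conj()

m = np.arange(4).reshape(2, 2).view(MatArray) * 1j
print(m.H.shape)  # (2, 2)
```

Arithmetic and views preserve the subclass, so `m.H` and `m.mT` are again `MatArray` instances and the properties compose in larger expressions.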
participants (16)
- Alan Isaac
- Andras Deak
- Cameron Blocker
- Charles R Harris
- Eric Wieser
- Hameer Abbasi
- Ilhan Polat
- Juan Nunez-Iglesias
- Kirill Balunov
- Marten van Kerkwijk
- Matthew Brett
- Ralf Gommers
- Sebastian Berg
- Stephan Hoyer
- Stewart Clelland
- Todd