With numpy arrays, I miss being able to spell a.conj().T as a.H, as one can with numpy matrices. Has adding this attribute to arrays ever been under consideration? Thanks, Alan Isaac
On Sun, Jul 7, 2013 at 9:28 AM, Alan G Isaac <alan.isaac@gmail.com> wrote:
With numpy arrays, I miss being able to spell a.conj().T as a.H, as one can with numpy matrices.
Has adding this attribute to arrays ever been under consideration?
There was a long thread about this back around 1.1 or so, a long time ago in any case. IIRC, Travis was opposed. I think part of the problem was that arr.T is a view, but arr.H would not be. Probably it could be made to return an iterator that performed the conjugation, or we could simply return a new array. I'm not opposed myself, but I'd have to review the old discussion to see if there was good reason not to have it in the first place. I think the original discussion of an abs method took place about the same time. Chuck
On Sun, Jul 7, 2013 at 9:28 AM, Alan G Isaac <alan.isaac@gmail.com> wrote: I miss being able to spell a.conj().T as a.H, as one can with numpy matrices.
On 7/7/2013 4:49 PM, Charles R Harris wrote:
There was a long thread about this back around 1.1 or so, a long time ago in any case. IIRC, Travis was opposed. I think part of the problem was that arr.T is a view, but arr.H would not be. Probably it could be made to return an iterator that performed the conjugation, or we could simply return a new array. I'm not opposed myself, but I'd have to review the old discussion to see if there was good reason not to have it in the first place. I think the original discussion of an abs method took place about the same time.
If not being a view is determinative, could a .ct() method be considered? Or would the objection apply there too? Thanks, Alan
On 13 Jul 2013 16:30, "Alan G Isaac" <alan.isaac@gmail.com> wrote:
On Sun, Jul 7, 2013 at 9:28 AM, Alan G Isaac <alan.isaac@gmail.com> wrote:
I miss being able to spell a.conj().T as a.H, as one can with numpy matrices.
On 7/7/2013 4:49 PM, Charles R Harris wrote:
There was a long thread about this back around 1.1 or so, a long time ago in any case. IIRC, Travis was opposed. I think part of the problem was that arr.T is a view, but arr.H would not be. Probably it could be made to return an iterator that performed the conjugation, or we could simply return a new array. I'm not opposed myself, but I'd have to review the old discussion to see if there was good reason not to have it in the first place. I think the original discussion of an abs method took place about the same time.
If not being a view is determinative, could a .ct() method be considered? Or would the objection apply there too?
Why not just write def H(a): return a.conj().T in your local namespace? The resulting code will be even more concise than if we had a .ct() method. ndarray has way too many attributes already IMHO (though I realize this may be a minority view). -n
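[Editor's note: Nathaniel's one-liner, written out as a runnable sketch; the helper name H and the sample matrix are arbitrary.]

```python
import numpy as np

def H(a):
    """Return the conjugate transpose (Hermitian adjoint) of `a`.

    Unlike `a.T`, this returns a new array rather than a view,
    since the conjugation has to write new values.
    """
    return a.conj().T

a = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])
print(H(a))

# A @ H(A) is Hermitian for any complex matrix A:
g = a @ H(a)
print(np.allclose(g, H(g)))  # True
```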
On 7/13/2013 1:46 PM, Nathaniel Smith wrote:
Why not just write
def H(a): return a.conj().T
in your local namespace?
First of all, I am sympathetic to being conservative about the addition of attributes! But the question about adding a.H is about the possibility of improving:
- speed (relative to adding a function of my own)
- readability (including error-free readability of others' code)
- consistency (across code bases and objects)
- competitiveness (with other array languages)
- convenience (including key strokes)
I agree that there are alternatives for the last of these. Alan
On Sat, Jul 13, 2013 at 7:46 PM, Nathaniel Smith <njs@pobox.com> wrote:
Why not just write
def H(a): return a.conj().T
It's hard to convince students that this is the Best Way of doing things in NumPy. Why, they ask, can you do it using a' in MATLAB, then? I've tripped over this one before, since it's not the kind of thing you imagine would be unimplemented, and then spend some time trying to find it. Stéfan
On Thu, Jul 18, 2013 at 4:18 PM, Stéfan van der Walt <stefan@sun.ac.za> wrote:
On Sat, Jul 13, 2013 at 7:46 PM, Nathaniel Smith <njs@pobox.com> wrote:
Why not just write
def H(a): return a.conj().T
It's hard to convince students that this is the Best Way of doing things in NumPy. Why, they ask, can you do it using a' in MATLAB, then?
I've tripped over this one before, since it's not the kind of thing you imagine would be unimplemented, and then spend some time trying to find it.
+1 for adding a H attribute. Here's the end of the old discussion Chuck referred to: http://thread.gmane.org/gmane.comp.python.numeric.general/6637. No strong arguments against and then several more votes in favor. Ralf
On Sat, Jul 20, 2013 at 12:36 PM, Ralf Gommers <ralf.gommers@gmail.com> wrote:
On Thu, Jul 18, 2013 at 4:18 PM, Stéfan van der Walt <stefan@sun.ac.za> wrote:
On Sat, Jul 13, 2013 at 7:46 PM, Nathaniel Smith <njs@pobox.com> wrote:
Why not just write
def H(a): return a.conj().T
It's hard to convince students that this is the Best Way of doing things in NumPy. Why, they ask, can you do it using a' in MATLAB, then?
I've tripped over this one before, since it's not the kind of thing you imagine would be unimplemented, and then spend some time trying to find it.
+1 for adding a H attribute.
Here's the end of the old discussion Chuck referred to: http://thread.gmane.org/gmane.comp.python.numeric.general/6637. No strong arguments against and then several more votes in favor.
Are there other precedents where an attribute would involve data-copying? I'm thinking that numpy generally does better than matlab by being more explicit about its memory usage... (But, I'm no mathematician and I could see it being quite a convenience to have .H) My two cents, Sebastian Haase
On Sat, Jul 20, 2013 at 3:30 PM, Sebastian Haase <seb.haase@gmail.com> wrote:
On Sat, Jul 20, 2013 at 12:36 PM, Ralf Gommers <ralf.gommers@gmail.com> wrote:
On Thu, Jul 18, 2013 at 4:18 PM, Stéfan van der Walt <stefan@sun.ac.za> wrote:
On Sat, Jul 13, 2013 at 7:46 PM, Nathaniel Smith <njs@pobox.com> wrote:
Why not just write
def H(a): return a.conj().T
It's hard to convince students that this is the Best Way of doing things in NumPy. Why, they ask, can you do it using a' in MATLAB, then?
I've tripped over this one before, since it's not the kind of thing you imagine would be unimplemented, and then spend some time trying to find it.
+1 for adding a H attribute.
Here's the end of the old discussion Chuck referred to: http://thread.gmane.org/gmane.comp.python.numeric.general/6637. No strong arguments against and then several more votes in favor.
Are there other precedents where an attribute would involve data-copying?
np.matrix.H for example. If you meant ndarray attributes and not attributes of numpy objects, I guess no. I don't think that matters much compared to having an intuitive and consistent API though. Ralf
I'm thinking that numpy generally does better than matlab by being more explicit about its memory usage... (But, I'm no mathematician and I could see it being quite a convenience to have .H)
My two cents, Sebastian Haase _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
On Sat, 20 Jul 2013 15:30:48 +0200, Sebastian Haase wrote:
Are there other precedents where an attribute would involve data-copying? I'm thinking that numpy generally does better than matlab by being more explicit about its memory usage... (But, I'm no mathematician and I could see it being quite a convenience to have .H)
Hopefully we'll eventually have lazily evaluated arrays so that we can do things like views of ufuncs on data. Unfortunately, this is not doable with the current ndarray, since its structure is tied to a pointer and strides. Stéfan
On Thu, Jul 18, 2013 at 3:18 PM, Stéfan van der Walt <stefan@sun.ac.za> wrote:
On Sat, Jul 13, 2013 at 7:46 PM, Nathaniel Smith <njs@pobox.com> wrote:
Why not just write
def H(a): return a.conj().T
It's hard to convince students that this is the Best Way of doing things in NumPy. Why, they ask, can you do it using a' in MATLAB, then?
I guess I'd try to treat it as a teachable moment... the answer points to a basic difference between numpy and MATLAB. Numpy operates at a slightly lower level of abstraction. In MATLAB you're encouraged to think of arrays as just mathematical matrices and let MATLAB worry about how to actually represent those inside the computer. Sometimes it does a good job, sometimes not. In numpy you need to think of arrays as structured representations of a chunk of memory. There are disadvantages to this -- e.g. keeping track of which operations return views and which return copies can be tricky -- but it also gives a lot of power: views are awesome, you get better interoperability with C libraries/Cython, better ability to predict which operations are expensive or cheap, more opportunities to use clever tricks when you need to, etc. And one example of this is that transpose and conjugate transpose really are very different at this level, because one is a cheap stride manipulation that returns a view, and the other is a (relatively) expensive data copying operation. The convention in Python is that attribute access is supposed to be cheap, while function calls serve as a warning that something expensive might be going on. So in short: MATLAB is optimized for doing linear algebra and not thinking too hard about programming; numpy is optimized for writing good programs. Having .T but not .H is an example of this split. Also it's a good opportunity to demonstrate the value of making little helper functions, which is a powerful technique that students generally need to be taught ;-). -n
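[Editor's note: the view/copy distinction described here can be seen directly; a minimal sketch, where the exact stride values depend on dtype and shape.]

```python
import numpy as np

a = np.arange(6, dtype=complex).reshape(2, 3)

# Transposing is a cheap stride manipulation: same buffer, strides swapped.
print(a.strides, a.T.strides)
print(np.may_share_memory(a, a.T))       # True

# Conjugating has to write new values, so it allocates a new buffer.
print(np.may_share_memory(a, a.conj()))  # False
```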
What if .H is not an attribute, but a method? Is this enough of a warning about copying? Eugene
On Mon, Jul 22, 2013 at 3:11 PM, Nathaniel Smith <njs@pobox.com> wrote:
On Thu, Jul 18, 2013 at 3:18 PM, Stéfan van der Walt <stefan@sun.ac.za> wrote:
On Sat, Jul 13, 2013 at 7:46 PM, Nathaniel Smith <njs@pobox.com> wrote:
Why not just write
def H(a): return a.conj().T
It's hard to convince students that this is the Best Way of doing things in NumPy. Why, they ask, can you do it using a' in MATLAB, then?
I guess I'd try to treat it as a teachable moment... the answer points to a basic difference between numpy and MATLAB. Numpy operates at a slightly lower level of abstraction. In MATLAB you're encouraged to think of arrays as just mathematical matrices and let MATLAB worry about how to actually represent those inside the computer. Sometimes it does a good job, sometimes not. In numpy you need to think of arrays as structured representations of a chunk of memory. There are disadvantages to this -- e.g. keeping track of which operations return views and which return copies can be tricky -- but it also gives a lot of power: views are awesome, you get better interoperability with C libraries/Cython, better ability to predict which operations are expensive or cheap, more opportunities to use clever tricks when you need to, etc. And one example of this is that transpose and conjugate transpose really are very different at this level, because one is a cheap stride manipulation that returns a view, and the other is a (relatively) expensive data copying operation. The convention in Python is that attribute access is supposed to be cheap, while function calls serve as a warning that something expensive might be going on. So in short: MATLAB is optimized for doing linear algebra and not thinking too hard about programming; numpy is optimized for writing good programs. Having .T but not .H is an example of this split. Also it's a good opportunity to demonstrate the value of making little helper functions, which is a powerful technique that students generally need to be taught ;-).
-n
On the other hand, the most salient quality of an unavoidable copy is that it is unavoidable. For people for whom using Hermitian conjugates is common, it's not like they won't do it just because they can't avoid a copy that can't be avoided. Given that if a problem dictates a Hermitian conjugate be taken, then it will be taken, then: a.H is closer to the mathematical notation, eases migration for matlab users, and does not require everyone to reinvent their own little version of the same function over and over. All of that seems more compelling than this particular arbitrary convention, personally. Bryan On Jul 22, 2013, at 3:10 PM, Nathaniel Smith <njs@pobox.com> wrote:
On Thu, Jul 18, 2013 at 3:18 PM, Stéfan van der Walt <stefan@sun.ac.za> wrote:
On Sat, Jul 13, 2013 at 7:46 PM, Nathaniel Smith <njs@pobox.com> wrote:
Why not just write
def H(a): return a.conj().T
It's hard to convince students that this is the Best Way of doing things in NumPy. Why, they ask, can you do it using a' in MATLAB, then?
I guess I'd try to treat it as a teachable moment... the answer points to a basic difference between numpy and MATLAB. Numpy operates at a slightly lower level of abstraction. In MATLAB you're encouraged to think of arrays as just mathematical matrices and let MATLAB worry about how to actually represent those inside the computer. Sometimes it does a good job, sometimes not. In numpy you need to think of arrays as structured representations of a chunk of memory. There are disadvantages to this -- e.g. keeping track of which operations return views and which return copies can be tricky -- but it also gives a lot of power: views are awesome, you get better interoperability with C libraries/Cython, better ability to predict which operations are expensive or cheap, more opportunities to use clever tricks when you need to, etc.
And one example of this is that transpose and conjugate transpose really are very different at this level, because one is a cheap stride manipulation that returns a view, and the other is a (relatively) expensive data copying operation. The convention in Python is that attribute access is supposed to be cheap, while function calls serve as a warning that something expensive might be going on. So in short: MATLAB is optimized for doing linear algebra and not thinking too hard about programming; numpy is optimized for writing good programs. Having .T but not .H is an example of this split.
Also it's a good opportunity to demonstrate the value of making little helper functions, which is a powerful technique that students generally need to be taught ;-).
-n
On 7/22/2013 3:10 PM, Nathaniel Smith wrote:
Having .T but not .H is an example of this split.
Hate to do this but ... Readability counts. Special cases aren't special enough to break the rules. Although practicality beats purity. How much is the split a rule or "just" a convention, and is there enough practicality here to beat the purity of the split? Note: this is not a rhetorical question. However: if you propose A.conjugate().transpose() as providing a teachable moment about why to use NumPy instead of A' in Matlab, I conclude you do not ever teach most of my students. The real world matters. Since practicality beats purity, we do have A.conj().T, which is better but still not as readable as A.H would be. Or even A.H(), should that satisfy your objections (and still provide a teachable moment). Alan
Alan G Isaac <alan.isaac <at> gmail.com> writes:
On 7/22/2013 3:10 PM, Nathaniel Smith wrote:
Having .T but not .H is an example of this split.
Hate to do this but ...
Readability counts.
+10! A.conjugate().transpose() is unspeakably horrible IMHO. Since there's no way to avoid a copy you gain nothing by not providing the convenience function. It should be fairly obvious that an operation which changes the values of an array (and doesn't work in-place) necessarily takes a copy. I think it's more than sufficient to simply document the fact that A.H will return a copy. A user coming from Matlab probably doesn't care that it takes a copy but you'd be hard pressed to convince them there's any benefit of writing A.conjugate().transpose() over exactly what it looks like in textbooks - A.H Regards, Dave
On Tue, Jul 23, 2013 at 12:35 AM, Dave Hirschfeld <dave.hirschfeld@gmail.com> wrote:
Alan G Isaac <alan.isaac <at> gmail.com> writes:
On 7/22/2013 3:10 PM, Nathaniel Smith wrote:
Having .T but not .H is an example of this split.
Hate to do this but ...
Readability counts.
+10!
A.conjugate().transpose() is unspeakably horrible IMHO. Since there's no way to avoid a copy you gain nothing by not providing the convenience function.
Silly suggestion: why not just make .H a callable? a.H() is nearly as short/handy as .H, it fits easily into the mnemonic pattern suggested by .T, yet the extra () are indicative that something potentially big/expensive is happening... Cheers, f -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail
On 07/23/2013 09:35 AM, Dave Hirschfeld wrote:
Alan G Isaac <alan.isaac <at> gmail.com> writes:
On 7/22/2013 3:10 PM, Nathaniel Smith wrote:
Having .T but not .H is an example of this split.
Hate to do this but ...
Readability counts.
+10!
A.conjugate().transpose() is unspeakably horrible IMHO. Since there's no way to avoid a copy you gain nothing by not providing the convenience function.
It should be fairly obvious that an operation which changes the values of an array (and doesn't work in-place) necessarily takes a copy. I think it's more than sufficient to simply document the fact that A.H will return a copy.
I don't think this is obvious at all. In fact, I'd fully expect A.H to return a view that conjugates the values on the fly as they are read/written (just the same way the array is "transposed on the fly" or "sliced on the fly" with other views). There's lots of uses for A.H to be a conjugating-view, e.g., np.dot(A.H, A) can be done on-the-fly by BLAS at no extra cost, and so on. These are currently not possible with pure NumPy without a copy, which is a pretty big defect IMO (and one reason I'd call BLAS myself using Cython rather than use np.dot...) So -1 on using A.H for anything but a proper view, and "A.conjt()" or something similar for a method that does a copy. Dag Sverre
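[Editor's note: for what it's worth, plain NumPy already ships one operation with the conjugation fused in, though only for 1-d vectors: np.vdot conjugates its first argument internally. A small sketch of the distinction Dag is after; the sample vectors are arbitrary.]

```python
import numpy as np

v = np.array([1 + 1j, 2 - 3j])
w = np.array([0 + 2j, 1 + 1j])

# np.vdot conjugates its first argument on the fly; no conjugated
# copy of v appears in user code.
fused = np.vdot(v, w)
explicit = np.dot(v.conj(), w)   # materializes conj(v) first
print(fused, explicit)
print(np.allclose(fused, explicit))  # True

# For matrix products there is no such fused path in plain NumPy:
# A.conj().T @ A always builds the conjugated copy first.
```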
On 07/23/2013 10:35 AM, Dag Sverre Seljebotn wrote:
On 07/23/2013 09:35 AM, Dave Hirschfeld wrote:
Alan G Isaac <alan.isaac <at> gmail.com> writes:
On 7/22/2013 3:10 PM, Nathaniel Smith wrote:
Having .T but not .H is an example of this split.
Hate to do this but ...
Readability counts.
+10!
A.conjugate().transpose() is unspeakably horrible IMHO. Since there's no way to avoid a copy you gain nothing by not providing the convenience function.
It should be fairly obvious that an operation which changes the values of an array (and doesn't work in-place) necessarily takes a copy. I think it's more than sufficient to simply document the fact that A.H will return a copy.
I don't think this is obvious at all. In fact, I'd fully expect A.H to return a view that conjugates the values on the fly as they are read/written (just the same way the array is "transposed on the fly" or "sliced on the fly" with other views).
There's lots of uses for A.H to be a conjugating-view, e.g., np.dot(A.H, A) can be done on-the-fly by BLAS at no extra cost, and so on. These are currently not possible with pure NumPy without a copy, which is a pretty big defect IMO (and one reason I'd call BLAS myself using Cython rather than use np.dot...)
So -1 on using A.H for anything but a proper view, and "A.conjt()" or something similar for a method that does a copy.
Sorry: I'm +1 on another name for a method that does a copy. Which can eventually be made redundant with A.H.copy(), if somebody ever takes on the work to make that happen...but at least I think the path to that should be kept open. Dag Sverre
On 7/23/2013 4:36 AM, Dag Sverre Seljebotn wrote:
I'm +1 on another name for a method that does a copy. Which can eventually be made redundant with A.H.copy(), if somebody ever takes on the work to make that happen...but at least I think the path to that should be kept open.
If that is the decision, I would suggest A.ct(). But is this really necessary? An obvious path is to introduce A.H now, document that it makes a copy, and document that it may eventually produce an iterative view. Think how much nicer things would be evolving if diagonal had been implemented as an attribute with documentation that it would eventually be a writable view. Isn't there some analogy with this situation? Alan
On Tue, Jul 23, 2013 at 10:35 AM, Dag Sverre Seljebotn <d.s.seljebotn@astro.uio.no> wrote:
So -1 on using A.H for anything but a proper view, and "A.conjt()" or something similar for a method that does a copy.
"A.T.conj()" is just as clear, so my feeling is that we should either add A.H / A.H() or leave it be. Stéfan
23.07.2013 15:42, Stéfan van der Walt kirjoitti:
On Tue, Jul 23, 2013 at 10:35 AM, Dag Sverre Seljebotn <d.s.seljebotn@astro.uio.no> wrote:
So -1 on using A.H for anything but a proper view, and "A.conjt()" or something similar for a method that does a copy.
"A.T.conj()" is just as clear, so my feeling is that we should either add A.H / A.H() or leave it be.
The .H property has been implemented in Numpy matrices and Scipy's sparse matrices for many years, and AFAIK the view issue apparently hasn't caused much confusion. I think having it return an iterator (similarly to .flat which I think is rarely used) that is not compatible with ndarrays would be quite confusing. Implementing a full complex-conjugating ndarray view for this purpose on the other hand seems quite a large hassle, for somewhat dubious gains. If it is implemented as returning a copy, it can be documented in a way that leaves leeway for changing the implementation to a view later on. -- Pauli Virtanen
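[Editor's note: the existing .H Pauli mentions can be tried directly on np.matrix (scipy's sparse matrices expose the same property); note that for complex data it returns a copy.]

```python
import numpy as np

m = np.matrix([[1 + 2j, 3j],
               [0,      4 - 1j]])

# .H is conj().T, and for complex data it returns a copy, not a view:
print(m.H)
print(np.may_share_memory(m, m.H))    # False
print((m.H == m.conj().T).all())      # True
```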
On 7/23/2013 9:09 AM, Pauli Virtanen wrote:
.flat which I think is rarely used
Until ``diagonal`` completes its transition, use of ``flat`` seems the best way to reset the diagonal on an array. Am I wrong? I use it that way all the time. Alan Isaac
On Tue, Jul 23, 2013 at 3:39 PM, Alan G Isaac <alan.isaac@gmail.com> wrote:
On 7/23/2013 9:09 AM, Pauli Virtanen wrote:
.flat which I think is rarely used
Until ``diagonal`` completes its transition, use of ``flat`` seems the best way to reset the diagonal on an array. Am I wrong? I use it that way all the time.
I usually write x[np.diag_indices_from(x)] = [1,2,3] Stéfan
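[Editor's note: both diagonal-setting idioms from this exchange, side by side; a sketch, where the flat-stride trick assumes a square, C-contiguous array.]

```python
import numpy as np

x = np.zeros((3, 3))
# Via .flat: the diagonal of an n x n C-contiguous array sits at
# flat positions 0, n+1, 2*(n+1), ...
x.flat[::x.shape[1] + 1] = [1, 2, 3]

# Via explicit index arrays:
y = np.zeros((3, 3))
y[np.diag_indices_from(y)] = [1, 2, 3]

print(x)
print(np.array_equal(x, y))  # True
```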
On Tue, Jul 23, 2013 at 10:11 AM, Stéfan van der Walt <stefan@sun.ac.za> wrote:
On Tue, Jul 23, 2013 at 3:39 PM, Alan G Isaac <alan.isaac@gmail.com> wrote:
On 7/23/2013 9:09 AM, Pauli Virtanen wrote:
.flat which I think is rarely used
Don't assume .flat is not commonly used. A common idiom in matlab is "a[:]" to flatten an array. When porting code over from matlab, it is typical to replace that with either "a.flat" or "a.flatten()", depending on whether an iterator or an array is needed. Cheers! Ben Root
23.07.2013 17:34, Benjamin Root kirjoitti: [clip]
Don't assume .flat is not commonly used. A common idiom in matlab is "a[:]" to flatten an array. When porting code over from matlab, it is typical to replace that with either "a.flat" or "a.flatten()", depending on whether an iterator or an array is needed.
It is much more rarely used than `ravel()` and `flatten()`, as can be verified by grepping e.g. the matplotlib source code. -- Pauli Virtanen
On Tue, Jul 23, 2013 at 10:46 AM, Pauli Virtanen <pav@iki.fi> wrote:
23.07.2013 17:34, Benjamin Root kirjoitti: [clip]
Don't assume .flat is not commonly used. A common idiom in matlab is "a[:]" to flatten an array. When porting code over from matlab, it is typical to replace that with either "a.flat" or "a.flatten()", depending on whether an iterator or an array is needed.
It is much more rarely used than `ravel()` and `flatten()`, as can be verified by grepping e.g. the matplotlib source code.
The matplotlib source code is not a port from Matlab, so grepping that wouldn't prove anything. Meanwhile, the "NumPy for Matlab users" page notes that a.flatten() makes a copy. A newbie to NumPy would then (correctly) look up the documentation for a.flatten() and see in the "See Also" section that "a.flat" is just an iterator rather than a copy, and would often use that to avoid the copy. Cheers! Ben Root
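[Editor's note: the copy/view/iterator distinction under discussion, in a short sketch.]

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

f = a.flatten()   # always a copy
r = a.ravel()     # a view whenever the memory layout allows it
it = a.flat       # a flatiter over the array in C order

print(np.may_share_memory(a, f))   # False
print(np.may_share_memory(a, r))   # True (a is contiguous here)
print(type(it))                    # <class 'numpy.flatiter'>

# Writing through .flat modifies the original array:
a.flat[0] = 99
print(a[0, 0])                     # 99
```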
On Tue, Jul 23, 2013 at 8:46 AM, Pauli Virtanen <pav@iki.fi> wrote:
23.07.2013 17:34, Benjamin Root kirjoitti: [clip]
Don't assume .flat is not commonly used. A common idiom in matlab is "a[:]" to flatten an array. When porting code over from matlab, it is typical to replace that with either "a.flat" or "a.flatten()", depending on whether an iterator or an array is needed.
It is much more rarely used than `ravel()` and `flatten()`, as can be verified by grepping e.g. the matplotlib source code.
Grepping in my code, I find a lot of things like dfx = van.dot((ax2 - ax1).flat) IIRC, the flat version was faster than other methods. Chuck
On Tue, 2013-07-23 at 10:22 -0600, Charles R Harris wrote:
On Tue, Jul 23, 2013 at 8:46 AM, Pauli Virtanen <pav@iki.fi> wrote:
23.07.2013 17:34, Benjamin Root kirjoitti: [clip]
> Don't assume .flat is not commonly used. A common idiom in matlab is "a[:]" to flatten an array. When porting code over from matlab, it is typical to replace that with either "a.flat" or "a.flatten()", depending on whether an iterator or an array is needed.
It is much more rarely used than `ravel()` and `flatten()`, as can be verified by grepping e.g. the matplotlib source code.
Grepping in my code, I find a lot of things like
dfx = van.dot((ax2 - ax1).flat)
IIRC, the flat version was faster than other methods.
Faster than flatten certainly (since flatten forces a copy); I would be quite surprised if it is faster than ravel, and since dot can't make use of the iterator, that seems more natural to me. - Sebastian
Chuck
23.07.2013 19:22, Charles R Harris kirjoitti: [clip]
Grepping in my code, I find a lot of things like
dfx = van.dot((ax2 - ax1).flat)
IIRC, the flat version was faster than other methods.
That goes through the same code path as `van.dot(np.asarray((ax2 - ax1).flat))`, which calls the `__array__` attribute of the flatiter object. If it's faster than .ravel(), that is surprising. -- Pauli Virtanen
On Tue, Jul 23, 2013 at 10:36 AM, Pauli Virtanen <pav@iki.fi> wrote:
23.07.2013 19:22, Charles R Harris kirjoitti: [clip]
Grepping in my code, I find a lot of things like
dfx = van.dot((ax2 - ax1).flat)
IIRC, the flat version was faster than other methods.
That goes through the same code path as `van.dot(np.asarray((ax2 - ax1).flat))`, which calls the `__array__` attribute of the flatiter object. If it's faster than .ravel(), that is surprising.
Well, I never use ravel, there are zero examples in my code ;) So you may be correct. I'm not sure the example I gave is the one where '*.flat' wins, but I recall such a case and have just used flat a lot ever since. Chuck
On Tue, Jul 23, 2013 at 1:05 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
On Tue, Jul 23, 2013 at 10:36 AM, Pauli Virtanen <pav@iki.fi> wrote:
23.07.2013 19:22, Charles R Harris kirjoitti: [clip]
Grepping in my code, I find a lot of things like
dfx = van.dot((ax2 - ax1).flat)
IIRC, the flat version was faster than other methods.
That goes through the same code path as `van.dot(np.asarray((ax2 - ax1).flat))`, which calls the `__array__` attribute of the flatiter object. If it's faster than .ravel(), that is surprising.
Well, I never use ravel, there are zero examples in my code ;) So you may be correct.
I'm not sure the example I gave is the one where '*.flat' wins, but I recall such a case and have just used flat a lot ever since.
Chuck
Just another survey:
scipy: ravel: 136 (including stats), flat: 6, flatten: 37 (not current master)
statsmodels: ravel: 137, flat: 0, flatten: 9
I only use ravel (what am I supposed to do with an iterator if I want a view?) (I think the equivalent of matlab x(:) is x.ravel("F"), not flat or flatten) Josef
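[Editor's note: Josef's point about MATLAB's x(:), sketched out.]

```python
import numpy as np

x = np.array([[1, 2],
              [3, 4]])

# MATLAB's x(:) reads the array in column-major (Fortran) order,
# so the closest NumPy equivalent is ravel with order='F':
print(x.ravel(order='F'))   # [1 3 2 4]
print(x.ravel())            # C order: [1 2 3 4]
```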
On Tue, Jul 23, 2013 at 6:09 AM, Pauli Virtanen <pav@iki.fi> wrote:
The .H property has been implemented in Numpy matrices and Scipy's sparse matrices for many years.
Then we're done. Numpy is an array package, NOT a matrix package, and while you can implement matrix math with arrays (and we do), having quick and easy mnemonics for common matrix math operations (but uncommon general purpose array operations) is not the job of numpy. That's what the matrix object is for. Yes, I know the matrix object isn't really what it should be, and doesn't get much use, but if you want something that is natural for doing matrix math, and particularly natural for teaching it -- that's what it's for -- work to make it what it could be, rather than polluting numpy with this stuff. One of the things I've loved about numpy after moving from MATLAB is that matrices are second-class citizens, not the other way around. (OK, I'll go away now....) -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov
On Wed, Jul 24, 2013 at 2:15 AM, Chris Barker - NOAA Federal <chris.barker@noaa.gov> wrote:
On Tue, Jul 23, 2013 at 6:09 AM, Pauli Virtanen <pav@iki.fi> wrote:
The .H property has been implemented in Numpy matrices and Scipy's sparse matrices for many years.
Then we're done. Numpy is an array package, NOT a matrix package, and while you can implement matrix math with arrays (and we do), having quick and easy mnemonics for common matrix math operations (but uncommon general purpose array operations) is not the job of numpy. That's what the matrix object is for.
I would argue that the ship sailed when we added .T already. Most users see no difference between the addition of .T and .H. The matrix class should probably be deprecated and removed from NumPy in the long run--being a second class citizen not used by the developers themselves is not sustainable. And, now that we have "dot" as a method, there's very little advantage to it. Stéfan
On Wed, Jul 24, 2013 at 8:53 AM, Stéfan van der Walt <stefan@sun.ac.za> wrote:
On Wed, Jul 24, 2013 at 2:15 AM, Chris Barker - NOAA Federal <chris.barker@noaa.gov> wrote:
On Tue, Jul 23, 2013 at 6:09 AM, Pauli Virtanen <pav@iki.fi> wrote:
The .H property has been implemented in Numpy matrices and Scipy's sparse matrices for many years.
Then we're done. Numpy is an array package, NOT a matrix package, and while you can implement matrix math with arrays (and we do), having quick and easy mnemonics for common matrix math operations (but uncommon general purpose array operations) is not the job of numpy. That's what the matrix object is for.
I would argue that the ship sailed when we added .T already. Most users see no difference between the addition of .T and .H.
The matrix class should probably be deprecated and removed from NumPy in the long run -- being a second-class citizen not used by the developers themselves is not sustainable. And, now that we have "dot" as a method, there's very little advantage to it.
Stéfan
Maybe this is the point where one just needs to do a poll. And finally someone has to make the decision. I feel that adding a method .H() would be the compromise! Alan, could you live with that? It is short enough and still emphasises the fact that it is NOT a view and therefore can behave differently from .T in certain scenarios. It also leaves the door open to adding an iterator .H attribute later on without introducing the above-mentioned code breaks. Who could make (i.e. is willing to make) the decision?

((I would not open the discussion about ndarray vs. matrix -- it gets far too involved, and we would be talking about far-future directions instead of "a single-letter addition", which obviously already has big enough support and had so years ago))

Regards, Sebastian Haase
On Wed, Jul 24, 2013 at 9:15 AM, Sebastian Haase <seb.haase@gmail.com> wrote:
I feel that adding a method .H() would be the compromise!
Thinking about this more, I think it would just confuse most users... why .T and not .H; then you have to start explaining the underlying implementation detail. For users who already understand the implementation detail, finding .T.conj() would not be too hard.
((I would not open the discussion about ndarray vs. matrix -- it gets far too involved, and we would be talking about far-future directions instead of "a single-letter addition", which obviously already has big enough support and had so years ago))
I am willing to write up a NEP if there's any interest. The plan would be to remove the Matrix class from numpy over two or three releases, and publish it as a separate package on PyPi. Stéfan
I think a .H is feature creep and too specialized. What's .H of an int, a str, a bool? It's just .T and a view, so you cannot rely on conj() making a copy if you don't work with complex. .T is just a reshape function and has **nothing** to do with matrix algebra.

    >>> x = np.arange(12).reshape(3, 4)
    >>> x
    array([[ 0,  1,  2,  3],
           [ 4,  5,  6,  7],
           [ 8,  9, 10, 11]])
    >>> np.may_share_memory(x, x.T)
    True
    >>> np.may_share_memory(x, x.conj())
    True
    >>> y = x + 1j
    >>> np.may_share_memory(y, y.conj())
    False
    >>> y.dtype
    dtype('complex128')
    >>> x.conj().dtype
    dtype('int32')

Josef

On Wed, Jul 24, 2013 at 3:30 AM, Stéfan van der Walt <stefan@sun.ac.za> wrote:
On Wed, Jul 24, 2013 at 9:15 AM, Sebastian Haase <seb.haase@gmail.com> wrote:
I feel that adding a method .H() would be the compromise!
Thinking about this more, I think it would just confuse most users... why .T and not .H; then you have to start explaining the underlying implementation detail. For users who already understand the implementation detail, finding .T.conj() would not be too hard.
((I would not open the discussion about ndarray vs. matrix -- it gets far too involved, and we would be talking about far-future directions instead of "a single-letter addition", which obviously already has big enough support and had so years ago))
I am willing to write up a NEP if there's any interest. The plan would be to remove the Matrix class from numpy over two or three releases, and publish it as a separate package on PyPi.
Stéfan

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
<josef.pktd <at> gmail.com> writes:
I think a .H is feature creep and too specialized.
What's .H of an int, a str, a bool?
It's just .T and a view, so you cannot rely on conj() making a copy if you don't work with complex.
.T is just a reshape function and has **nothing** to do with matrix algebra.
It seems to me that that ship has already sailed -- i.e. conj doesn't make much sense for str arrays, but it still works, in the sense that it's a no-op:

    In [16]: A = asarray(list('abcdefghi')).reshape(3, 3)
        ...: np.all(A.T == A.conj().T)
    Out[16]: True

If we're voting, my vote goes to adding the .H attribute, for all the reasons Alan has specified. Document that it returns a copy but that it may in future return a view, so it is not future-proof to operate on the result in place.

I'm -1 on .H() as it will require code changes if it ever changes to a property, and it will simply result in questions about why .T is a property and .H is a function (and why it's a property for (sparse) matrices).

Regarding Dag's example:

    xh = x.H
    x *= 2
    assert np.all(2 * xh == x.H)

I'm sceptical that there's much code out there actually relying on the fact that a transpose is a view with the specified intention of altering the original array in place. I work with a lot of beginners, and whenever I've seen them operate in place on a transpose it has been a bug in the code, leading to a discussion of how, for performance reasons, numpy will return a view where possible, leading to yet further discussion of when it is and isn't possible to return a view.

The third option of .H returning a view would probably be agreeable to everyone, but I don't think we should punt on this decision for something that, if it does happen, is likely years away. It seems that work on this front is happening in projects other than numpy. Even if, for example, sometime in the future numpy's internals were replaced with libdynd or another expression-graph engine, surely this would result in more breaking changes than .H returning a view rather than a copy?!

IANAD, so I'm happy with whatever the consensus is; I just thought I'd put forward the view from a (specific type of) user perspective.

Regards, Dave
On Wed, Jul 24, 2013 at 9:23 AM, Dave Hirschfeld <dave.hirschfeld@gmail.com> wrote:
If we're voting, my vote goes to adding the .H attribute, for all the reasons Alan has specified. Document that it returns a copy but that it may in future return a view, so it is not future-proof to operate on the result in place.
As soon as you talk about attributes "returning" things you've already broken Python's mental model... attributes are things that sit there, not things that execute arbitrary code. Of course this is not how the actual implementation works -- attribute access *can* in fact execute arbitrary code -- but the mental model is important, so we should preserve it wherever we can. Just mentioning an attribute should not cause unbounded memory allocations.

Consider these two expressions:

    x = solve(dot(arr, arr.T), arr.T)
    x = solve(dot(arr, arr.H), arr.H)

Mathematically they're very similar, and the mathematics-like notation does a good job of expressing that similarity while hiding mathematically irrelevant details. Which is what mathematical notation is for. But numpy isn't a toolkit for writing mathematical formulae; it's a toolkit for writing computational algorithms that implement mathematical formulae, and algorithmically those two expressions are radically different. The first one allocates one temporary (the result from 'dot'); the second one allocates three temporaries. The second one is gratuitously inefficient, since two of those temporaries are identical, but they're being computed twice anyway.
I'm sceptical that there's much code out there actually relying on the fact that a transpose is a view with the specified intention of altering the original array inplace.
I work with a lot of beginners and whenever I've seen them operate inplace on a transpose it has been a bug in the code, leading to a discussion of how, for performance reasons, numpy will return a view where possible, leading to yet further discussion of when it is and isn't possible to return a view.
The point isn't that there's code that relies specifically on .T returning a view. It's that to be a good programmer, you need to *know whether* it returns a view -- exactly as you say in the second paragraph. And a library should not hide these kinds of details. -n
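The temporaries point from earlier in this message can be made concrete with a short sketch (the matrix values and variable names here are mine, purely illustrative):

```python
import numpy as np

# If .H eagerly built a new array, an expression mentioning arr.H twice
# would compute the conjugate transpose twice. Hoisting it into a local
# computes it once and reuses it -- one temporary instead of three.
arr = np.array([[1.0 + 1j, 2.0],
                [0.5j, 3.0 - 1j]])

arr_h = arr.conj().T                             # one explicit temporary
x = np.linalg.solve(np.dot(arr, arr_h), arr_h)   # reuse it twice
```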
Nathaniel Smith <njs <at> pobox.com> writes:
As soon as you talk about attributes "returning" things you've already broken Python's mental model... attributes are things that sit there, not things that execute arbitrary code. Of course this is not how the actual implementation works -- attribute access *can* in fact execute arbitrary code -- but the mental model is important, so we should preserve it wherever we can. Just mentioning an attribute should not cause unbounded memory allocations.
Yep, sorry - sloppy use of terminology which I agree is important in helping understand what's happening. -Dave
An idea: if .H is ideally going to be a view, and we want to keep it this way, we could have a .h() method with the present implementation. This would preserve the name .H for the conjugate view -- when someone finds the way to do it. This way we would increase readability, simplify some matrix-algebra code, and keep the API consistent.

On 24 July 2013 13:08, Dave Hirschfeld <dave.hirschfeld@gmail.com> wrote:
Nathaniel Smith <njs <at> pobox.com> writes:
As soon as you talk about attributes "returning" things you've already broken Python's mental model... attributes are things that sit there, not things that execute arbitrary code. Of course this is not how the actual implementation works -- attribute access *can* in fact execute arbitrary code -- but the mental model is important, so we should preserve it wherever we can. Just mentioning an attribute should not cause unbounded memory allocations.
Yep, sorry - sloppy use of terminology which I agree is important in helping understand what's happening.
-Dave
On Wed, Jul 24, 2013 at 8:47 AM, Daπid <davidmenhur@gmail.com> wrote:
An idea:
If .H is ideally going to be a view, and we want to keep it this way, we could have a .h() method with the present implementation. This would preserve the name .H for the conjugate view --when someone finds the way to do it.
This way we would increase readability, simplify some matrix-algebra code, and keep the API consistent.
I could get behind a .h() method until the .H attribute is ready. +1

Cheers! Ben Root
On Wed, Jul 24, 2013 at 12:54 PM, Nathaniel Smith <njs@pobox.com> wrote:
The point isn't that there's code that relies specifically on .T returning a view. It's that to be a good programmer, you need to *know whether* it returns a view -- exactly as you say in the second paragraph. And a library should not hide these kinds of details.
After listening to the arguments by yourself and Dag, I think I buy into the idea that we should hold off on this until we have ufunc views or something similar implemented. Also, if we split off the matrix package, we can give other people who really care about that (perhaps Alan is interested?) ownership, and let them run with it (I mainly use ndarrays myself). Stéfan
On Wed, Jul 24, 2013 at 8:30 AM, Stéfan van der Walt <stefan@sun.ac.za> wrote:
I am willing to write up a NEP if there's any interest. The plan would be to remove the Matrix class from numpy over two or three releases, and publish it as a separate package on PyPi.
Please do! There are some sticky issues to work through (e.g. how to deprecate the "matrix" entry in the numpy namespace, what to do with scipy.sparse), and I don't know whether we'll decide to go through with it in the end, but the way to figure that out is to, you know, work through them :-). -n
plan would be to remove the Matrix class from numpy over two or three releases, and publish it as a separate package on PyPi.
Anyone willing to take ownership of it? Maybe we should still do it if not -- at least it will make it clear that it is orphaned. Though one plus to having matrix in numpy is that it was a testbed for ndarray subclassing... -Chris
Please do! There are some sticky issues to work through (e.g. how to deprecate the "matrix" entry in the numpy namespace, what to do with scipy.sparse), and I don't know whether we'll decide to go through with it in the end, but the way to figure that out is to, you know, work through them :-).
-n
On 7/24/2013 3:15 AM, Sebastian Haase wrote:
I feel that adding a method .H() would be the compromise !
Alan, could you live with that ?
I feel .H() now would get in the way of a .H attribute later, which some have indicated could be added as an iterative view in a future numpy. I'd rather wait for that.

My assessment of the conversation so far: there is not adequate support for a .H attribute until it can be an iterative view. I believe that almost everyone (possibly not Josef) would accept or want a .H attribute if it could provide an iterative view. (Is that correct?) So I'll drop out of the conversation, but I hope the user interest that has been displayed stimulates interest in that feature request.

Thanks to everyone who shared their perspective on this issue. And my apologies to those (e.g., Dag) whom I annoyed by being too bullheaded.

Cheers, Alan
On Jul 23, 2013, at 11:54 PM, "Stéfan van der Walt" <stefan@sun.ac.za> wrote:
The .H property has been implemented in Numpy matrices and Scipy's sparse matrices for many years.
Then we're done. Numpy is an array package, NOT a matrix package, and while you can implement matrix math with arrays (and we do), having quick and easy mnemonics for common matrix math operations (but uncommon general-purpose array operations) is not the job of numpy. That's what the matrix object is for.
I would argue that the ship sailed when we added .T already. Most users see no difference between the addition of .T and .H.
I don't know who can speak for "most users", but I see them quite differently. Transposing is a common operation outside of linear algebra -- I, for one, use it to work with image arrays, which image libraries often store as the transpose of the "natural" numpy layout. But anyway, just because we have one domain-specific convenience attribute doesn't mean we should add them all.
The matrix class should probably be deprecated and removed from NumPy in the long run -- being a second-class citizen not used by the developers themselves is not sustainable.
I agree, but the fact that no one has stepped up to maintain and improve it tells me that there is not a very large community that wants a clean linear algebra interface -- not that we should try to build such an interface directly into numpy. Is there really a point to a clean interface to the Hermitian transpose, but not plain old matrix multiply?
And, now that we have "dot" as a method,
Agh, but "dot" is a method -- so we still don't have a clean relationship with the math in textbooks: AB => A.dot(B). Anyway, adding .H is clearly not a big deal; I just don't think it's going to satisfy anyone anyway. -Chris
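The notational gap Chris points out is easy to see in a short sketch (the matrices here are arbitrary examples of mine):

```python
import numpy as np

# The textbook product A B C becomes a method chain with "dot":
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.eye(2)

abc = A.dot(B).dot(C)   # A B C, read left to right
# abc is [[2., 1.], [4., 3.]]
```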
On Tue, Jul 23, 2013 at 9:35 AM, Dag Sverre Seljebotn <d.s.seljebotn@astro.uio.no> wrote:
I don't think this is obvious at all. In fact, I'd fully expect A.H to return a view that conjugates the values on the fly as they are read/written (just the same way the array is "transposed on the fly" or "sliced on the fly" with other views).
There's lots of uses for A.H to be a conjugating-view, e.g., np.dot(A.H, A) can be done on-the-fly by BLAS at no extra cost, and so on. These are currently not possible with pure NumPy without a copy, which is a pretty big defect IMO (and one reason I'd call BLAS myself using Cython rather than use np.dot...)
I was skeptical about this at first, on the grounds that yeah, it'd be nice if at some point we allowed for on-the-fly transformations, but it isn't happening anytime soon. On second thought, though, we actually could implement this pretty easily -- just define a new dtype "conjcomplex" that stores the value x+iy as two doubles (x, -y). Then complex_arr.view(conjcomplex) would preserve the memory contents but invert the numeric sign of all imaginary components, while complex_arr.astype(conjcomplex) would preserve the numeric value but alter the memory representation.

Because this latter cast is safe, all the existing ufuncs would automatically work fine on conjcomplex arrays. But we could also define conjcomplex-specific ufunc loops for cases like dot() where a more efficient implementation is possible (using the above-mentioned BLAS flags).

Don't know if we want to actually do this, but it's doable.

(I don't have any in-principle objection to .H(), but won't it just lead to more threads complaining about how confusing it is that .T and .H() are different?)

-n
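No "conjcomplex" dtype exists in numpy, but the memory-representation fact the proposal relies on -- that conjugation is just a sign flip on the stored imaginary halves -- can be checked directly (a sketch of mine, not the proposed dtype):

```python
import numpy as np

# complex128 is stored as (real, imag) float64 pairs, so negating
# every second float64 in the buffer conjugates the array in place
# without touching the real parts.
a = np.array([1 + 2j, 3 - 4j])
a.view(np.float64)[1::2] *= -1   # flip sign of stored imaginary parts
# a is now array([1.-2.j, 3.+4.j])
```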
On 23 Jul 2013 15:55, "Stéfan van der Walt" <stefan@sun.ac.za> wrote:
On Tue, Jul 23, 2013 at 4:51 PM, Nathaniel Smith <njs@pobox.com> wrote:
Don't know if we want to actually do this, but it's doable.
Would we need a matching conjugate data-type for each complex data-type then, or can the data-type be "parameterized"?
Right now dtypes can't be parametrized. In this particular case it doesn't matter a whole lot anyway I think - you'd have to write basically the same code to handle different width complex types in either case, the difference is just whether that code got called at runtime or build time. -n
23.07.2013 17:51, Nathaniel Smith kirjoitti: [clip: conjcomplex dtype]
Because this latter cast is safe, all the existing ufuncs would automatically work fine on conjcomplex arrays. But we could also define conjcomplex-specific ufunc loops for cases like dot() where a more efficient implementation is possible (using the above-mentioned BLAS flags).
Don't know if we want to actually do this, but it's doable.
There's quite a lot of third-party code that doesn't do automatic casting (e.g. all Cython code interfacing with Numpy, C extensions, f2py I think), but rather fails for incompatible input dtypes. Having arrays with a new complex dtype around would require changes in this sort of code.

In this sense, having an iterator of some sort with an __array__ attribute would work. However, an iterator doesn't support (without a lot of work) the various ndarray attributes, which would be confusing.

-- Pauli Virtanen
On 23 Jul 2013 16:03, "Pauli Virtanen" <pav@iki.fi> wrote:
23.07.2013 17:51, Nathaniel Smith kirjoitti: [clip: conjcomplex dtype]
Because this latter cast is safe, all the existing ufuncs would automatically work fine on conjcomplex arrays. But we could also define conjcomplex-specific ufunc loops for cases like dot() where a more efficient implementation is possible (using the above-mentioned BLAS flags).
Don't know if we want to actually do this, but it's doable.
There's quite a lot of third-party code that doesn't do automatic casting (e.g. all Cython code interfacing with Numpy, C extensions, f2py I think), but rather fails for incompatible input dtypes. Having arrays with a new complex dtype around would require changes in this sort of code.
In this sense having an iterator of some sort with an __array__ attribute would work. However, an iterator doesn't support (without a lot of work) the various ndarray attributes which would be confusing.
Surely there's more code that handles unusual but correctly castable dtypes than there is code that handles custom iterator objects that are missing ndarray attributes? -n
I'm trying to understand the state of this discussion. I believe that proponents of adding a .H attribute have primarily emphasized:

- readability (and general ease of use)
- consistency with matrix and masked array
- forward looking (to a future when .H can be a view)

The opponents have primarily emphasized:

- inconsistency with the convention that for arrays, instance attributes should return views

Is this a correct summary? If it is correct, I believe the proponents' case is stronger. All the considerations are valid, so it is a matter of deciding how to weight them. The alternative of offering a new method seems inferior in terms of readability and consistency, and it is not adequately forward looking. If the alternative is nevertheless chosen, I suggest that it should definitely *not* be .H(), both because of the conflict with uses by matrix and masked array, and because I expect that eventually the desire for an attribute will win the day, and it would be a shame for the obvious notation to be lost.

Alan Isaac
On 07/23/2013 07:53 PM, Alan G Isaac wrote:
I'm trying to understand the state of this discussion. I believe that proponents of adding a .H attribute have primarily emphasized:
- readability (and general ease of use)
- consistency with matrix and masked array
- forward looking (to a future when .H can be a view)
I disagree with this being forward looking, as it explicitly creates a situation where code will break if .H becomes a view, e.g.:

    xh = x.H
    x *= 2
    assert np.all(2 * xh == x.H)
The opponents have primarily emphasized
- inconsistency with convention that for arrays instance attributes should return views
I'd formulate this as simply "inconsistency with .T"; they are both motivated primarily as notational shorthands. Dag Sverre
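The breakage Dag describes can be demonstrated today with conj().T, which is what an eager .H would return (a sketch; the values and variable names are mine):

```python
import numpy as np

# .T is a view, so in-place updates to x stay visible through it;
# conj().T is a copy for complex arrays, so it goes stale -- the
# aliasing difference behind Dag's example.
x = np.array([[1 + 1j, 2.0],
              [3.0, 4 - 1j]])
xt = x.T              # view: shares memory with x
xh = x.conj().T       # copy: a snapshot of the conjugate transpose
x *= 2
assert np.may_share_memory(x, xt)
assert np.array_equal(xt, x.T)             # the view tracked the update
assert not np.array_equal(xh, x.conj().T)  # the copy did not
```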
On Tue, Jul 23, 2013 at 5:08 PM, Dag Sverre Seljebotn <d.s.seljebotn@astro.uio.no> wrote:
On 07/23/2013 07:53 PM, Alan G Isaac wrote:
I'm trying to understand the state of this discussion. I believe that proponents of adding a .H attribute have primarily emphasized:
- readability (and general ease of use)
- consistency with matrix and masked array
- forward looking (to a future when .H can be a view)
I disagree with this being forward looking, as it explicitly creates a situation where code will break if .H becomes a view, e.g.:
    xh = x.H
    x *= 2
    assert np.all(2 * xh == x.H)
The opponents have primarily emphasized
- inconsistency with convention that for arrays instance attributes should return views
I'd formulate this as simply "inconsistency with .T"; they are both motivated primarily as notational shorthands.
Do we really need a one letter shorthand for `a.conj().T` ? I don't. Josef (The one who wrote np.max(np.abs(y - x)) and np.max(np.abs(y / x - 1)) 30 or more times in the last 24 hours, in pdb.)
Dag Sverre
On 7/23/2013 5:32 PM, josef.pktd@gmail.com wrote:
Do we really need a one letter shorthand for `a.conj().T` ?
One way to assess this would be to propose removing it from matrix and masked array objects. If the yelping is loud enough, there is apparently need. I suspect the yelping would be pretty loud. Indeed, the reason I started this thread is that I'm using the matrix object less and less, and I definitely miss the .H attribute it offers.

In any case, need is the wrong criterion. The question is: do the gains in readability, consistency (across objects), convenience, and advertising appeal (e.g., to those used to other languages) outweigh the costs? It's a cost-benefit analysis. Obviously some people think the costs outweigh the benefits and others say they do not. We should look for ways to determine which group has the better case.

This discussion has made me much more inclined to believe it is a good idea to add this attribute. I agree that it would be an even better idea to add it as an iterative view, but nobody seems to feel that can happen quickly.

Alan
On 7/23/2013 5:08 PM, Dag Sverre Seljebotn wrote:
I disagree with this being forward looking, as it explicitly creates a situation where code will break if .H becomes a view
Well yes, we cannot have everything. Just as it is taking a while for ``diagonal`` to transition to providing a view, this would be true for .H when the time comes. Naturally, it would be documented that it may change to a view, just as it is documented for ``diagonal``. But it is nevertheless forward looking in an obvious sense: it provides access to an extremely convenient and much more readable notation that will in any case eventually be available. Also, in the current context, matrices and masked arrays already have this attribute, so this transitional issue already exists.

Out of curiosity: do you use NumPy to work with complex arrays?

Alan
On Tue, Jul 23, 2013 at 4:35 AM, Dag Sverre Seljebotn <d.s.seljebotn@astro.uio.no> wrote: ...
There's lots of uses for A.H to be a conjugating-view, e.g., np.dot(A.H, A) can be done on-the-fly by BLAS at no extra cost, and so on. These are currently not possible with pure NumPy without a copy, which is a pretty big defect IMO (and one reason I'd call BLAS myself using Cython rather than use np.dot...)
Wouldn't the simpler way just be to expose those linalg functions?

    hdot(X, Y) == dot(X.T, Y)   (if not complex)
               == dot(X.H, Y)   (if complex)

Josef
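A sketch of the helper Josef suggests (the name hdot is hypothetical -- no such function exists in numpy or scipy):

```python
import numpy as np

def hdot(x, y):
    # dot(x.H, y): conjugate-transpose x for complex input,
    # plain transpose otherwise (hypothetical helper).
    x = np.asarray(x)
    xt = x.conj().T if np.iscomplexobj(x) else x.T
    return np.dot(xt, y)

A = np.array([[1 + 1j, 0], [2, 1 - 1j]])
gram_c = hdot(A, A)               # same as dot(A.conj().T, A)

R = np.array([[1.0, 2.0], [3.0, 4.0]])
gram_r = hdot(R, R)               # same as dot(R.T, R)
```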
participants (18)

- Alan G Isaac
- Benjamin Root
- Bryan Van de Ven
- Charles R Harris
- Chris Barker - NOAA Federal
- Dag Sverre Seljebotn
- Dave Hirschfeld
- Daπid
- Fernando Perez
- Jerome Kieffer
- josef.pktd@gmail.com
- Nathaniel Smith
- Pauli Virtanen
- Ralf Gommers
- Sebastian Berg
- Sebastian Haase
- Stéfan van der Walt
- Toder, Evgeny