
I got misled by (extrapolated erroneously from) this description of temporaries in the documentation, http://docs.scipy.org/doc/numpy/user/basics.indexing.html#assigning-values-t... :

"... a new array is extracted from the original (as a temporary) containing the values at 1, 1, 3, 1, then the value 1 is added to the temporary, and then the temporary is assigned back to the original array. Thus the value of the array at x[1]+1 is assigned to x[1] three times, rather than being incremented 3 times."

It is talking about a slightly different scenario, of course: there the temporary corresponds to the LHS. Anyhow, as long as the behavior is defined rigorously it should not be a problem. Now, I vaguely remember abusing ufuncs and aliasing in interactive sessions for some weird cumsum-like operations (I plead bashfully guilty).

On Fri, Aug 7, 2015 at 1:38 PM, Sebastian Berg <sebastian@sipsolutions.net> wrote:
On Fri, 2015-08-07 at 13:14 +0530, srean wrote:
Wait, when assignments and slicing mix, wasn't the behavior supposed to be equivalent to copying the RHS to a temporary and then assigning using the temporary? Is that a false memory, or has the behavior changed? As long as the behavior is well defined and succinct it should be ok.
No, NumPy has never done that as far as I know. And since SIMD instructions etc. make this even less predictable, you have to avoid it (you used to be able to abuse the in-place logic, but usually the same can be done with ufunc.accumulate, so it was a bad idea anyway).
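For what it's worth, the "copy the RHS to a temporary first" behavior srean remembered can always be forced by hand by materializing the aliasing operand explicitly. A minimal sketch (the array here is only illustrative):

>>> import numpy as np
>>> a = np.ones(10)
>>> a[1:] += a[:-1].copy()   # RHS is evaluated to a fresh array before the in-place add
>>> a
array([ 1., 2., 2., 2., 2., 2., 2., 2., 2., 2.])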
Pauli is currently working on implementing the logic needed to detect whether such a copy is necessary [1], which is very cool indeed. So I think it is likely we will have such copy logic in NumPy 1.11.
- Sebastian
[1] See https://github.com/numpy/numpy/pull/6166; it is not an easy problem.
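On the indexing passage quoted at the top of the thread: if the accumulating behavior is actually wanted with repeated indices, there is np.add.at (available since NumPy 1.8, if I remember right). A small sketch of the difference:

>>> import numpy as np
>>> x = np.arange(5)
>>> x[np.array([1, 1, 3, 1])] += 1   # buffered: x[1] is written, not incremented, three times
>>> x
array([0, 2, 2, 4, 4])
>>> x = np.arange(5)
>>> np.add.at(x, [1, 1, 3, 1], 1)    # unbuffered: x[1] really is incremented three times
>>> x
array([0, 4, 2, 4, 4])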
On Tuesday, July 28, 2015, Sebastian Berg <sebastian@sipsolutions.net> wrote:
On Mon Jul 27 22:51:52 2015 GMT+0200, Sturla Molden wrote:
> On 27/07/15 22:10, Anton Akhmerov wrote:
> > Hi everyone,
> >
> > I have encountered an initially rather confusing problem in a piece of
> > code that attempted to symmetrize a matrix: `h += h.T`
> > The problem of course appears due to `h.T` being a view of `h`, and
> > some elements being overwritten during the __iadd__ call.
>
I think the typical proposal is to raise a warning. Note there is np.may_share_memory. But the logic to give the warning is possibly not quite easy, since this is OK to use sometimes. If someone figures it out (mostly) I would be very happy to see such warnings.
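A minimal sketch of both the np.may_share_memory check and the usual workarounds for the `h += h.T` case (the example matrix is made up):

>>> import numpy as np
>>> h = np.arange(9.).reshape(3, 3)
>>> np.may_share_memory(h, h.T)    # the kind of check a warning could hinge on
True
>>> sym = h + h.T                  # out-of-place: always well defined
>>> h += h.T.copy()                # or copy the aliasing operand explicitly
>>> np.allclose(h, sym)
True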
> Here is another example
>
> >>> a = np.ones(10)
> >>> a[1:] += a[:-1]
> >>> a
> array([ 1., 2., 3., 2., 3., 2., 3., 2., 3., 2.])
>
> I am not sure I totally dislike this behavior. If it could be made
> consistent it could be used to vectorize recursive algorithms. In the case
> above I would prefer the output to be:
>
> array([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])
>
> It does not happen because we do not enforce that the result of one
> operation is stored before the next two operands are read. The only way
> to speed up recursive equations today is to use compiled code.
>
> Sturla
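For the record, the result Sturla would prefer above is exactly what the accumulate spelling Sebastian mentioned gives, with no aliasing involved:

>>> import numpy as np
>>> a = np.ones(10)
>>> np.add.accumulate(a)   # equivalently np.cumsum(a)
array([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])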