On Thu, Sep 6, 2012 at 1:41 AM, Sebastian Berg wrote:
> Hey,
> No idea if this is simply not supported or just a bug, though I am
> guessing that such usage simply is not planned.
I think that's right... currently numpy makes no guarantees about the order in which ufunc loops are performed, or even that they are performed in any strictly sequential order at all. In ordinary cases this freedom lets it make various optimizations, but it means you can't count on any specific behaviour in the unusual case where different locations in the output array are stored in overlapping memory.

Fixing this would require two things:

(a) Some code to detect when an array may have internal overlaps (sort of like np.may_share_memory, but applied across an array's own axes). Not entirely trivial.

(b) A "fallback mode" for ufuncs where, if the code in (a) detects that we are (probably) dealing with one of these arrays, the operations are processed in some predictable order without buffering.

I suppose if someone came up with these two pieces, and it didn't look like they would cause slowdowns in common cases, and the code in (b) avoided creating duplicate code paths that increase the maintenance burden, etc., then probably no-one would object to making these arrays act in a better-defined way? I don't think most people are that worried about this, though.

Your original code would be much clearer if it just used np.sum...

-n
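To make the overlap problem concrete, here is a small sketch (not from the original thread) of a self-overlapping view built with np.lib.stride_tricks.as_strided; it assumes an 8-byte int64 itemsize:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# A 1-d buffer of five int64 values (8 bytes each).
base = np.arange(5, dtype=np.int64)

# Two "rows" striding through the same buffer: row 0 covers
# elements 0..3 and row 1 covers elements 1..4, so the rows
# overlap in three memory locations.
overlap = as_strided(base, shape=(2, 4), strides=(8, 8))

# may_share_memory can compare two separate views...
print(np.may_share_memory(overlap[0], overlap[1]))  # True

# ...but nothing in numpy reports that 'overlap' overlaps
# *itself* across its own axes -- that is the missing piece (a).

# Writing through one row while reading the other touches the
# same bytes, so the result depends on the order and buffering
# of the ufunc inner loop -- which numpy does not guarantee:
np.add(overlap[0], 1, out=overlap[1])
print(base)  # contents here are not well defined
```

Because the inner loop may or may not buffer its inputs, the final contents of base can differ between numpy versions or even between otherwise identical runs, which is exactly why piece (b), a predictable unbuffered fallback, would be needed.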