I would like to try to reach a consensus about a long-standing inconsistent behavior of reduceat(), reported and discussed here
In summary, it seems an elegant and logical design choice, which all users would expect, for
out = ufunc.reduceat(a, indices)
to produce, for every index j (except the last one),
out[j] = ufunc.reduce(a[indices[j]:indices[j+1]])
However, the current documented and actual behavior, in the case
indices[j] >= indices[j+1]
is to simply return
out[j] = a[indices[j]]
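To make the discrepancy concrete, here is a small example with np.add, where one of the slices is empty:

```python
import numpy as np

a = np.arange(8)                    # [0, 1, 2, 3, 4, 5, 6, 7]
indices = np.array([0, 4, 4, 6])    # the slice a[4:4] is empty

out = np.add.reduceat(a, indices)

# Expected under out[j] = add.reduce(a[indices[j]:indices[j+1]]):
#   out[1] would be add.reduce(a[4:4]) == 0 (the identity of add).
# Actual documented behavior: since indices[1] >= indices[2],
#   out[1] == a[indices[1]] == a[4] == 4.
# So out == [6, 4, 9, 13] rather than [6, 0, 9, 13].
```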
I cannot see any application where this behavior is useful or where this choice makes sense. This seems to be just a bug that should be fixed.
What do people think?
PS: A quick fix for the current implementation is
out = ufunc.reduceat(a, indices)
out[:-1] *= np.diff(indices) > 0
(np.diff(indices) has one fewer element than out, so the mask applies to all but the last entry; zeroing the offending entries gives the expected result when the ufunc's identity is 0, as for add.)
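Applied to the same small example, the masking workaround looks like this (a sketch for np.add specifically, since zeroing only matches the expected empty-slice result when the ufunc's identity is 0):

```python
import numpy as np

a = np.arange(8)
indices = np.array([0, 4, 4, 6])

out = np.add.reduceat(a, indices)   # [6, 4, 9, 13], with out[1] = a[4]
# Zero out the entries whose slice is empty; np.diff(indices) > 0 has
# one fewer element than out, so it masks all but the last entry.
out[:-1] *= np.diff(indices) > 0
# out is now [6, 0, 9, 13], matching the expected semantics for add.
```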
Discussion is ongoing at the issue above, but it is perhaps worth mentioning more broadly the alternative of adding a slice argument (or start, stop, step arguments) to ufunc.reduce, which would mean we could deprecate reduceat altogether, since most uses of it would become
add.reduce(array, slice=slice(indices[:-1], indices[1:]))
(where now we are free to make the behaviour match what is expected for an empty slice)
Here, one would broadcast the slice if it were 0-d, and could pass in tuples of slices if a tuple of axes was used.
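Since the slice argument does not exist yet, here is a hypothetical sketch of the proposed semantics, with reduce_slices standing in for what ufunc.reduce(a, slice=slice(starts, stops)) would compute (the name and signature are illustrative, not an actual NumPy API):

```python
import numpy as np

def reduce_slices(ufunc, a, starts, stops):
    # Hypothetical semantics of ufunc.reduce(a, slice=slice(starts, stops)):
    # each output j is ufunc.reduce(a[starts[j]:stops[j]]), and an empty
    # slice reduces to the ufunc's identity instead of a[starts[j]].
    return np.array([ufunc.reduce(a[i:j]) for i, j in zip(starts, stops)])

a = np.arange(8)
indices = np.array([0, 4, 4, 6])
out = reduce_slices(np.add, a, indices[:-1], indices[1:])
# The empty slice a[4:4] reduces to add's identity, 0, so out == [6, 0, 9].
```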