I'm trying to understand numpy.subtract.reduce. The documentation doesn't seem to match the behavior. The documentation claims

    For a one-dimensional array, reduce produces results equivalent to:

        r = op.identity
        for i in xrange(len(A)):
            r = op(r, A[i])
        return r

However, numpy.subtract.reduce([1,2,3]) gives me 1-2-3 == -4, not 0-1-2-3 == -6.

Now, I'm on an older version (1.3.0), which might be the problem, but which is "correct" here, the code or the docs?

Thanks,
Johann
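A short script contrasting the docstring's pseudocode with the observed behavior (a sketch: an explicit start value of 0 stands in for `op.identity`, and Python 3's `range` for `xrange`):

```python
import numpy as np

def docstring_reduce(op, A, identity=0):
    # The docstring's pseudocode: fold left, starting from the identity
    # (0 is used explicitly here in place of op.identity).
    r = identity
    for i in range(len(A)):
        r = op(r, A[i])
    return r

print(np.subtract.reduce([1, 2, 3]))             # -4, i.e. 1 - 2 - 3
print(docstring_reduce(np.subtract, [1, 2, 3]))  # -6, i.e. 0 - 1 - 2 - 3
```

The actual reduce starts the fold from the first element, not from the identity, which is where the -4 vs. -6 discrepancy comes from.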
I get the same result on 1.4.1

On Thu, Jul 22, 2010 at 1:00 PM, Johann Hibschman <jhibschman+numpy@gmail.com> wrote:
I'm trying to understand numpy.subtract.reduce. The documentation doesn't seem to match the behavior. The documentation claims
For a one-dimensional array, reduce produces results equivalent to:

    r = op.identity
    for i in xrange(len(A)):
        r = op(r, A[i])
    return r

However, numpy.subtract.reduce([1,2,3]) gives me 1-2-3 == -4, not 0-1-2-3 == -6.
Now, I'm on an older version (1.3.0), which might be the problem, but which is "correct" here, the code or the docs?
Thanks, Johann
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
John Salvatier wrote:
I get the same result on 1.4.1
On Thu, Jul 22, 2010 at 1:00 PM, Johann Hibschman <jhibschman+numpy@gmail.com> wrote:
I'm trying to understand numpy.subtract.reduce. The documentation doesn't seem to match the behavior. The documentation claims
For a one-dimensional array, reduce produces results equivalent to:

    r = op.identity
    for i in xrange(len(A)):
        r = op(r, A[i])
    return r

However, numpy.subtract.reduce([1,2,3]) gives me 1-2-3 == -4, not 0-1-2-3 == -6.
Now, I'm on an older version (1.3.0), which might be the problem, but which is "correct" here, the code or the docs?
numpy.divide.reduce has the same "problem". If the docstring is correct, then numpy.divide.reduce([2.0, 2.0]) should be 0.25, but

    In [13]: np.divide.reduce([2.0, 2.0])
    Out[13]: 1.0

Instead of

    <identity> op <val0> op <val1> op ...

it appears to compute

    <val0> op <val1> op ...

Warren
Thanks, Johann

On 7/22/2010 4:00 PM, Johann Hibschman wrote:
I'm trying to understand numpy.subtract.reduce. The documentation doesn't seem to match the behavior. The documentation claims
For a one-dimensional array, reduce produces results equivalent to:

    r = op.identity
    for i in xrange(len(A)):
        r = op(r, A[i])
    return r

However, numpy.subtract.reduce([1,2,3]) gives me 1-2-3 == -4, not 0-1-2-3 == -6.
The behavior does not quite match Python's reduce. The rule seems to be: return the *right identity* for empty arrays, otherwise behave like Python's reduce.

    >>> import operator as o
    >>> reduce(o.sub, [1,2,3], 0)
    -6
    >>> reduce(o.sub, [1,2,3])
    -4
    >>> reduce(o.sub, [])
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: reduce() of empty sequence with no initial value
    >>> np.subtract.reduce([])
    0.0

Getting a right identity for an empty array is surprising. Matching Python's behavior (raising a TypeError) seems desirable. (?)

Unfortunately Python's reduce does not make ``initializer`` a keyword, but maybe NumPy could add this keyword anyway? (Not sure that's a good idea.)

Alan Isaac
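A wrapper along these lines is easy to sketch in user code (the name `reduce_with_init` is made up here; `functools.reduce` already accepts the initializer positionally):

```python
import numpy as np
from functools import reduce

def reduce_with_init(ufunc, a, initializer):
    # Hypothetical helper: a ufunc reduce with a Python-style initializer,
    # folding left from the given start value.
    return reduce(ufunc, np.asarray(a), initializer)

print(reduce_with_init(np.subtract, [1, 2, 3], 0))   # 0 - 1 - 2 - 3 == -6
print(reduce_with_init(np.divide, [2.0, 2.0], 1.0))  # 1.0 / 2.0 / 2.0 == 0.25
```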
Fri, 23 Jul 2010 10:29:47 -0400, Alan G Isaac wrote: [clip]
>>> np.subtract.reduce([])
0.0
Getting a right identity for an empty array is surprising. Matching Python's behavior (raising a TypeError) seems desirable. (?)
I don't think matching Python's behavior is a sufficient argument for a change. As far as I see, it'd mostly cause unnecessary breakage, with no significant gain. Besides, it's rather common to define

    sum_{z in Z} z = 0
    prod_{z in Z} z = 1

if Z is an empty set; this can then be extended to other reduction operations. Note that changing reduce behavior would require us to special-case the above two operations.

-- Pauli Virtanen
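The empty-set conventions cited above are exactly what NumPy's add and multiply reductions already implement:

```python
import numpy as np

# An empty reduction returns the operation's identity element,
# matching sum over the empty set = 0 and product over the empty set = 1.
print(np.add.reduce([]))       # 0.0
print(np.multiply.reduce([]))  # 1.0
print(sum([]))                 # 0; Python's builtin agrees for addition
```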
Fri, 23 Jul 2010 10:29:47 -0400, Alan G Isaac wrote:
>>> np.subtract.reduce([])
0.0
Getting a right identity for an empty array is surprising. Matching Python's behavior (raising a TypeError) seems desirable. (?)
On 7/23/2010 10:37 AM, Pauli Virtanen wrote:
I don't think matching Python's behavior is a sufficient argument for a change. As far as I see, it'd mostly cause unnecessary breakage, with no significant gain.
Besides, it's rather common to define
    sum_{z in Z} z = 0
    prod_{z in Z} z = 1
if Z is an empty set; this can then be extended to other reduction operations. Note that changing reduce behavior would require us to special-case the above two operations.
To reduce (pun intended) surprise is always a significant gain.

I don't understand the notion of "extend" you introduce here. The natural "extension" is to take a start value, as with Python's ``reduce``. Providing a default start value is natural for operators with an identity and is not for those without; correspondingly, we end up with ``sum`` and ``prod`` functions (which match reduce with the obvious default start value) but no equivalents for subtraction and division.

I also do not understand why there would have to be any special cases.

Returning a *right* identity for an operation that is otherwise a *left* fold is very odd, no matter how you slice it. That is what looks like special casing ...

Alan Isaac
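The correspondence between sum/prod and reduce-with-a-start-value can be checked directly with Python's own reduce:

```python
from functools import reduce
import operator

# sum is reduce with the obvious default start value...
assert sum([1, 2, 3]) == reduce(operator.add, [1, 2, 3], 0)

# ...but for subtraction there is no natural default:
print(reduce(operator.sub, [1, 2, 3]))     # -4: left fold from the first element
print(reduce(operator.sub, [1, 2, 3], 0))  # -6: left fold from an explicit 0
```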
Fri, 23 Jul 2010 11:17:56 -0400, Alan G Isaac wrote: [clip]
I also do not understand why there would have to be any special cases.
That's a technical issue: e.g. prod() is implemented via np.multiply.reduce, and it is not clear to me whether it is possible, in the ufunc machinery, to leave the identity undefined, or whether it is needed in some code paths (as the right identity).

It's possible to define binary ufuncs without an identity element (e.g. scipy.special.beta), so in principle the machinery to do the right thing is there.
Returning a *right* identity for an operation that is otherwise a *left* fold is very odd, no matter how you slice it. That is what looks like special casing...
I think I see your point now.

-- Pauli Virtanen
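The "machinery is there" point can be illustrated without scipy, via np.frompyfunc, which builds a ufunc with no identity element (whether an empty reduce then raises may vary across NumPy versions, hence the hedged try/except):

```python
import numpy as np

# A binary ufunc with no identity element:
sub = np.frompyfunc(lambda a, b: a - b, 2, 1)
print(sub.identity)                                  # None
print(sub.reduce(np.array([1, 2, 3], dtype=object))) # -4; reduce still works

# With no identity, an empty reduction has nothing to return:
try:
    sub.reduce(np.array([], dtype=object))
except ValueError as e:
    print("empty reduce failed:", e)
```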
Pauli Virtanen <pav@iki.fi> writes:
Returning a *right* identity for an operation that is otherwise a *left* fold is very odd, no matter how you slice it. That is what looks like special casing...
I think I see your point now.
I know this is unlikely to happen, since it would break things for a mostly-cosmetic (and that probably only in my eyes) improvement, but if reduce were defined as a *right* fold, then it would make sense for subtract (and divide) to use the right identity. A right fold is also perhaps more interesting, since it can be used to compute alternating series, while the regular left fold of subtract is pretty pointless.

It also seems more natural that subtract.reduce([]) returns 0, because then we can partition the sequence however we want and preserve

    np.subtract.reduce(np.append(x[:i], np.subtract.reduce(x[i:]))) == np.subtract.reduce(x)

for any i. But that's really just idle musing.

This is, by the way, how J (the APL-derived array language) does it, but there it's very natural to do right folds, since all operations are right-associative.

Cheers,
Johann
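The right-fold semantics and the partition invariant can be sketched in plain Python (`rreduce` is a made-up name for this hypothetical right-fold subtract.reduce):

```python
def rreduce(xs):
    # Hypothetical right-fold subtract.reduce (J's -/):
    # [1, 2, 3] -> 1 - (2 - (3 - 0)); empty input gives the right identity 0.
    out = 0
    for v in reversed(xs):
        out = v - out
    return out

print(rreduce([1, 2, 3]))  # 2, the alternating sum 1 - 2 + 3
print(rreduce([]))         # 0

# The partition invariant holds at every split point:
x = [5, 1, 2, 3]
assert all(rreduce(x[:i] + [rreduce(x[i:])]) == rreduce(x)
           for i in range(len(x) + 1))
```

The invariant works precisely because the empty suffix reduces to the right identity 0, and v - 0 == v, so splicing a partial reduction back in changes nothing.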
On 7/26/2010 9:41 AM, Johann Hibschman wrote:
if reduce were defined as a *right* fold, then it would make sense for subtract (and divide) to use the right identity
Instead of deviating from the Python definition of reduce, it would imo make more sense to introduce new functions, say foldl and foldr.

Alan Isaac
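A minimal sketch of what such a pair might look like (the names foldl/foldr follow the suggestion; the exact signatures are an assumption):

```python
from functools import reduce

def foldl(op, init, xs):
    # Left fold: op(...op(op(init, x0), x1)..., xn-1), same as Python's reduce.
    return reduce(op, xs, init)

def foldr(op, init, xs):
    # Right fold: op(x0, op(x1, ...op(xn-1, init)...)).
    return reduce(lambda acc, v: op(v, acc), reversed(list(xs)), init)

sub = lambda a, b: a - b
print(foldl(sub, 0, [1, 2, 3]))  # -6, ((0 - 1) - 2) - 3
print(foldr(sub, 0, [1, 2, 3]))  # 2, 1 - (2 - (3 - 0))
```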
participants (5)

- Alan G Isaac
- Johann Hibschman
- John Salvatier
- Pauli Virtanen
- Warren Weckesser