Re: [Numpy-discussion] numpy.mean still broken for large float32 arrays
True, I suppose there is no harm in accumulating at maximum precision and storing the result in the original dtype unless otherwise specified, although I wonder whether the current nditer supports such behavior.
Original Message From: "Alan G Isaac" alan.isaac@gmail.com Sent: 24-7-2014 18:09 To: "Discussion of Numerical Python" numpy-discussion@scipy.org Subject: Re: [Numpy-discussion] numpy.mean still broken for large float32 arrays
On 7/24/2014 5:59 AM, Eelco Hoogendoorn wrote to Thomas:
np.mean isn't broken; your understanding of floating point numbers is.
This comment seems to conflate separate issues: the desirable return type, and the computational algorithm. It is certainly possible to compute a mean of float32 doing reduction in float64 and still return a float32. There is nothing implicit in the name `mean` that says we have to just add everything up and divide by the count.
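As a minimal sketch of the point above (not the thread's proposed patch, just an illustration): you can ask NumPy to accumulate in float64 via the `dtype` argument and then cast the result back to float32, preserving the input's return type while avoiding single-precision accumulation error.

```python
import numpy as np

# A large float32 array; 0.1 is not exactly representable, so
# naive float32 accumulation can drift noticeably.
a = np.full(10_000_000, 0.1, dtype=np.float32)

# Reduce in double precision, then return the original dtype.
m = a.mean(dtype=np.float64).astype(np.float32)

print(m, m.dtype)
```

The key observation is that the accumulator precision and the return dtype are independent choices; nothing forces them to be the same.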
My own view is that `mean` would behave enough better if computed as a running mean that the speed loss would be justified. Naturally, similar issues arise for `var` and `std`, etc. See http://www.johndcook.com/standard_deviation.html for some discussion and references.
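For concreteness, the running (Welford-style) update referenced above can be sketched as follows; this is the standard recurrence from the linked discussion, not code from NumPy itself:

```python
def running_mean_var(xs):
    """Welford's online algorithm: one pass, numerically stable.

    Updates: m_k = m_{k-1} + (x_k - m_{k-1}) / k
             M2_k = M2_{k-1} + (x_k - m_{k-1}) * (x_k - m_k)
    """
    m = 0.0   # running mean
    m2 = 0.0  # running sum of squared deviations
    n = 0
    for x in xs:
        n += 1
        d = x - m
        m += d / n
        m2 += d * (x - m)
    # population variance, matching np.var's default ddof=0
    return m, (m2 / n if n else 0.0)

print(running_mean_var([1.0, 2.0, 3.0, 4.0]))  # (2.5, 1.25)
```

The trade-off is exactly the one named above: each element costs a division, so this is slower than a plain sum-and-divide, but the mean never drifts far from the data's magnitude, which keeps the accumulation well conditioned.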
Alan Isaac
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion