[Numpy-discussion] numpy.mean still broken for large float32 arrays

Eelco Hoogendoorn hoogendoorn.eelco at gmail.com
Thu Jul 24 16:42:53 EDT 2014


Inaccurate and utterly wrong are subjective. If you want to be sufficiently strict, floating point calculations are almost always 'utterly wrong'.

Granted, it would be nice if the docs specified the algorithm used. But numpy does not produce anything different from what a standard C loop or a C++ standard library function would. This isn't a bug report, but rather a feature request. That said, support for fancy reduction algorithms would certainly be nice, if implementing it in numpy in a coherent manner is feasible.
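
To make the issue and the usual workaround concrete, here is a small sketch (the array size, the 0.1 fill value, and the kahan_sum helper are illustrative assumptions, not anything from numpy itself). np.mean on a float32 array accumulates in the input's precision by default, so the result can drift from the true value; how much it drifts depends on the numpy version and memory layout, since newer releases use pairwise summation for contiguous reductions. Passing dtype=np.float64 forces a wider accumulator. The hand-rolled kahan_sum at the end is just one example of the kind of compensated ("fancy") reduction algorithm being discussed:

    import numpy as np

    # 10 million float32 values of 0.1; the exact mean is 0.1.
    a = np.full(10**7, 0.1, dtype=np.float32)

    naive = a.mean()                    # accumulates in float32 by default
    wide = a.mean(dtype=np.float64)     # workaround: use a float64 accumulator

    print(naive)   # may differ noticeably from 0.1, depending on numpy version
    print(wide)    # close to 0.1

    # Hand-rolled Kahan (compensated) summation; illustrative only, not numpy API.
    def kahan_sum(values):
        total = np.float32(0.0)
        comp = np.float32(0.0)          # running compensation for lost low-order bits
        for x in values:
            y = np.float32(x) - comp
            t = total + y
            comp = (t - total) - y
            total = t
        return total

    # Pure-Python loop, so only run it on a small slice for illustration.
    small = a[:10**5]
    print(kahan_sum(small) / np.float32(len(small)))
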

-----Original Message-----
From: "Joseph Martinot-Lagarde" <joseph.martinot-lagarde at m4x.org>
Sent: 24-7-2014 20:04
To: "numpy-discussion at scipy.org" <numpy-discussion at scipy.org>
Subject: Re: [Numpy-discussion] numpy.mean still broken for large float32 arrays

On 24/07/2014 12:55, Thomas Unterthiner wrote:
> I don't agree. The problem is that I expect `mean` to do something
> reasonable. The documentation mentions that the results can be
> "inaccurate", which is a huge understatement: the results can be utterly
> wrong. That is not reasonable. At the very least, a warning should be
> issued in cases where the dtype might not be appropriate.
>
Maybe the problem is the documentation, then. If this is a common error, 
it could be explicitly documented in the function documentation.
