"Inaccurate" and "utterly wrong" are subjective. If you want to be sufficiently strict, floating-point calculations are almost always 'utterly wrong'.

Granted, it would be nice if the docs specified the algorithm used. But numpy does not produce anything different from what a standard C loop or a C++ standard library function would. This isn't a bug report, but rather a feature request. That said, support for fancy reduction algorithms would certainly be nice, if implementing it in numpy in a coherent manner is feasible.
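To make the point concrete, here is a minimal sketch of the failure mode and of what a "fancy reduction" could buy. The `kahan_sum` helper is my own illustration of compensated summation, not NumPy's internal algorithm:

```python
import numpy as np

# float32 has a 24-bit significand, so a large running sum absorbs small
# addends entirely. Near 1e8 the spacing between float32 values is 8.0,
# so adding 1.0 changes nothing -- exactly how a naive mean goes wrong.
a = np.array([1e8] + [1.0] * 8, dtype=np.float32)

naive = np.float32(0.0)
for x in a:
    naive = naive + x           # plain C-loop-style accumulation
# naive == 1e8: all eight 1.0 terms were absorbed

def kahan_sum(values):
    """Compensated (Kahan) summation, a sketch of a fancier reduction.

    A correction term feeds the low-order bits lost by each addition
    back into the next one, recovering the lost precision even though
    every operation is still float32.
    """
    s = np.float32(0.0)
    c = np.float32(0.0)
    for x in values:
        y = x - c
        t = s + y
        c = (t - s) - y
        s = t
    return s

# kahan_sum(a) returns 100000008.0, the exact sum; the naive loop is off by 8.
# Today, users can also simply request a wider accumulator:
better_mean = a.mean(dtype=np.float64)
```

The `dtype` argument to `mean` is the workaround already available: it makes the accumulation happen in double precision regardless of the array's storage type.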

From: Joseph Martinot-Lagarde
Sent: 24-7-2014 20:04
To: numpy-discussion@scipy.org
Subject: Re: [Numpy-discussion] numpy.mean still broken for large float32arrays

> I don't agree. The problem is that I expect `mean` to do something
> reasonable. The documentation mentions that the results can be
> "inaccurate", which is a huge understatement: the results can be utterly
> wrong. That is not reasonable. At the very least, a warning should be
> issued in cases where the dtype might not be appropriate.
>

Maybe the problem is the documentation, then. If this is a common error, it could be explicitly documented in the function documentation.

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion