[Numpy-discussion] bug in numpy.mean() ?

Charles R Harris charlesr.harris at gmail.com
Wed Jan 25 00:03:49 EST 2012


On Tue, Jan 24, 2012 at 4:21 PM, Kathleen M Tacina <Kathleen.M.Tacina at nasa.gov> wrote:

>
> I found something similar, with a very simple example.
>
> On 64-bit linux, python 2.7.2, numpy development version:
>
> In [22]: a = 4000*np.ones((1024,1024),dtype=np.float32)
>
> In [23]: a.mean()
> Out[23]: 4034.16357421875
>
> In [24]: np.version.full_version
> Out[24]: '2.0.0.dev-55472ca'
>
>
> But, a Windows XP machine running python 2.7.2 with numpy 1.6.1 gives:
> >>>a = 4000*np.ones((1024,1024),dtype=np.float32)
> >>>a.mean()
> 4000.0
> >>>np.version.full_version
> '1.6.1'
>
>
>
Yes, the results are platform/compiler dependent. The 32-bit platforms tend
to use extended-precision accumulators and the x87 instruction set, while
the 64-bit platforms tend to use SSE2+. So the accumulation is done at
different precisions, even though the array dtypes are the same.
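
The usual workaround is to request a wider accumulator explicitly: mean()
takes a dtype argument that sets the precision of the intermediate sum.
Roughly (the float32 result will vary with the build, but the float64
accumulation is exact for this particular array):

>>> import numpy as np
>>> a = 4000*np.ones((1024,1024),dtype=np.float32)
>>> a.mean()                    # float32 accumulator, can drift
4034.16357421875
>>> a.mean(dtype=np.float64)    # float64 accumulator, exact here
4000.0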

<snip>

Chuck

