
On Tue, Jan 24, 2012 at 4:21 PM, Kathleen M Tacina < Kathleen.M.Tacina@nasa.gov> wrote:
I found something similar, with a very simple example.
On 64-bit linux, python 2.7.2, numpy development version:
In [22]: a = 4000*np.ones((1024,1024),dtype=np.float32)

In [23]: a.mean()
Out[23]: 4034.16357421875

In [24]: np.version.full_version
Out[24]: '2.0.0.dev-55472ca'
But, a Windows XP machine running python 2.7.2 with numpy 1.6.1 gives:
>>> a = 4000*np.ones((1024,1024),dtype=np.float32)
>>> a.mean()
4000.0
>>> np.version.full_version
'1.6.1'
Yes, the results are platform/compiler dependent. The 32-bit platforms tend to use extended-precision accumulators and the x87 instruction set; the 64-bit platforms tend to use SSE2+. Different precisions, even though you might think they are the same.

<snip>

Chuck
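A common workaround, independent of platform and instruction set, is to request a higher-precision accumulator explicitly via the `dtype` argument of `ndarray.mean` (or `np.sum`). This is a minimal sketch of that approach, assuming the same 1024x1024 float32 array as above:

```python
import numpy as np

# 2**20 float32 values, all exactly 4000.0; the true mean is 4000.0.
a = 4000 * np.ones((1024, 1024), dtype=np.float32)

# Forcing a float64 accumulator avoids the float32 rounding drift
# that a 32-bit running sum can accumulate over a million additions.
m = a.mean(dtype=np.float64)
print(m)  # 4000.0
```

The sum 1048576 * 4000 = 4194304000 is exactly representable in float64, so the division recovers 4000.0 exactly regardless of whether the default accumulator would have used x87 extended precision or SSE2 single precision.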