[Numpy-discussion] could anyone check on a 32bit system?

Sebastian Berg sebastian at sipsolutions.net
Wed May 1 05:08:15 EDT 2013


On Wed, 2013-05-01 at 11:00 +0200, Sebastian Berg wrote:
> On Tue, 2013-04-30 at 22:20 -0700, Matthew Brett wrote:
> > Hi,
> > 
> > On Tue, Apr 30, 2013 at 9:16 PM, Matthew Brett <matthew.brett at gmail.com> wrote:
> > > Hi,
> > >
> > > On Tue, Apr 30, 2013 at 8:08 PM, Yaroslav Halchenko
> > > <lists at onerussian.com> wrote:
> > >> could anyone on 32bit system with fresh numpy (1.7.1) test following:
> > >>
> > >>> wget -nc http://www.onerussian.com/tmp/data.npy ; python -c 'import numpy as np; data1 = np.load("/tmp/data.npy"); print  np.sum(data1[1,:,0,1]) - np.sum(data1, axis=1)[1,0,1]'
> > >>
> > >> 0.0
> > >>
> > >> because unfortunately it seems broken on a fresh Ubuntu raring (in the
> > >> 32-bit build only; it seems OK in 64-bit, and I also never ran into it
> > >> on older numpy releases):
> > >>
> > >>> python -c 'import numpy as np; data1 = np.load("/tmp/data.npy"); print  np.sum(data1[1,:,0,1]) - np.sum(data1, axis=1)[1,0,1]'
> > >> -1.11022302463e-16
> > >>
> > >> PS detected by failed tests of pymvpa
> > >
> > > Reduced case on numpy 1.7.1, 32-bit Ubuntu 12.04.2
> > >
> > > In [64]: data = np.array([[ 0.49505185,  0.47212842],
> > >        [ 0.53529587,  0.04366172],
> > >        [-0.13461665, -0.01664215]])
> > >
> > > In [65]: np.sum(data[:, 0]) - np.sum(data, axis=0)[0]
> > > Out[65]: 1.1102230246251565e-16
> > >
> > > No difference for single vector:
> > >
> > > In [4]: data1 = data[:, 0:1]
> > >
> > > In [5]: np.sum(data1[:, 0]) - np.sum(data1, axis=0)[0]
> > > Out[5]: 0.0
> > 
> > Also true on current numpy trunk:
> > 
> > In [2]: import numpy as np
> > 
> > In [3]: np.__version__
> > Out[3]: '1.8.0.dev-a8805f6'
> > 
> > In [4]: data = np.array([[ 0.49505185,  0.47212842],
> >    ....:        [ 0.53529587,  0.04366172],
> >    ....:        [-0.13461665, -0.01664215]])
> > 
> > In [5]: np.sum(data[:, 0]) - np.sum(data, axis=0)[0]
> > Out[5]: 1.1102230246251565e-16
> > 
> > Not true on numpy 1.6.1:
> > 
> > In [2]: np.__version__
> > Out[2]: '1.6.1'
> > 
> > In [3]: data = np.array([[ 0.49505185,  0.47212842],
> >    ....:        [ 0.53529587,  0.04366172],
> >    ....:        [-0.13461665, -0.01664215]])
> > 
> > In [4]: np.sum(data[:, 0]) - np.sum(data, axis=0)[0]
> > Out[4]: 0.0
> > 
> 
> Puzzles me; I didn't think the calculation order was different in the two
> cases, or that the reduction part was optimized. But maybe check the code:
> if it is optimized, it would calculate this more like `res += data[0];
> res += data[1]; res += data[2]` (for faster memory access), which would
> probably flush the extended-precision registers to memory at each step (I
> don't know this hardware stuff, so I might be wrong). One simple check
> hinting that this is going on would be to try the data in Fortran order.
> 
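The Fortran-order experiment suggested above might look like the following sketch (array values taken from Matthew's reduced case; on SSE2 hardware the printed difference may well be 0.0, since the extended-precision effect shows up only with x87/387 arithmetic):

```python
import numpy as np

# Matthew's reduced test case from earlier in the thread.
data = np.array([[ 0.49505185,  0.47212842],
                 [ 0.53529587,  0.04366172],
                 [-0.13461665, -0.01664215]])

# In Fortran (column-major) order, data[:, 0] is contiguous in memory,
# so the axis reduction and the 1-D sum traverse the same layout.
fdata = np.asfortranarray(data)

diff = np.sum(fdata[:, 0]) - np.sum(fdata, axis=0)[0]
print(diff)  # any discrepancy is at most on the order of one ulp (~1e-16)
```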

Well, I guess it is optimized, and the reason I thought it was not is that
it probably was not before 1.7. So I think that is probably the reason.

> - Sebastian
> 
> > Cheers,
> > 
> > Matthew
> > _______________________________________________
> > NumPy-Discussion mailing list
> > NumPy-Discussion at scipy.org
> > http://mail.scipy.org/mailman/listinfo/numpy-discussion
> > 
> 

More information about the NumPy-Discussion mailing list