
Hi there - I have a question about how this difference arises when I run the following simple code on Windows and Debian:
>>> from numarray import *
>>> a = reshape(arange(81.0), (9, 9)) + identity(9)
>>> ca = matrixmultiply(transpose(a), a)
>>> import numarray.linear_algebra as la
>>> ica = la.inverse(ca)
>>> ica.min()
-0.30951621414374736
>>> ica.max()
0.8888918586585135
The above is the result in Debian and below is that in Windows:
>>> ica.min()
-0.30951621414449404
>>> ica.max()
0.88889185865875686
So, what caused this difference? Both machines have Python 2.4 and numarray 1.3.2. The difference appears at the 10th/11th decimal place, which should still be within floating-point precision, right? This small difference causes big differences in my later calculations.

Best,
Xiangyi
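One piece of context for why the last digits can legitimately differ between platforms: the matrix being inverted here is badly conditioned, so tiny rounding differences (different compilers, different BLAS/LAPACK builds on Debian vs. Windows) get amplified by the inversion. A rough rule of thumb is that the relative error of the inverse is on the order of the condition number times machine epsilon. A minimal sketch of this check, using modern NumPy as a stand-in for the old numarray API (this is my illustration, not code from the original post):

```python
import numpy as np

# Rebuild the matrix from the post (NumPy's reshape/identity mirror
# numarray's reshape/identity here).
a = np.arange(81.0).reshape(9, 9) + np.identity(9)
ca = a.T @ a  # same as matrixmultiply(transpose(a), a)

# The condition number bounds how much rounding error is amplified
# when inverting ca; cond * machine-epsilon gives a rough estimate of
# the achievable relative accuracy of the inverse.
cond = np.linalg.cond(ca)
eps = np.finfo(float).eps

print("condition number of ca: %.3g" % cond)
print("rough relative-error scale: %.1g" % (cond * eps))
```

The condition number comes out large (far above 1), so agreement only to 10 or so decimal places between two different binary builds is exactly what floating-point theory predicts; neither result is "wrong", and downstream code that is sensitive to this difference is itself sensitive to the conditioning of the problem.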