I am using Polynomial.py from Scientific Python 2.1, together with Numeric 17.1.2. This has always served me well, but now we are busy upgrading our software, and I am currently porting some code to Scientific Python 2.4.1 and Numeric 22.0. Suddenly I no longer manage to get proper 2D polynomial fits above 4x4th order: at 5x5 the coefficients that come back from LinearAlgebra.linear_least_squares have exploded. In the old setup I easily managed 9x9th order if I needed to, though most of the time I'd stop at 6x6th order. Would anyone have any idea how this difference can come about?

I managed to work around this for the moment by using the equivalent code in the fitPolynomial routine that uses LinearAlgebra.generalized_inverse (which doesn't even have problems with the same data at 8x8), but this definitely feels not right! I can't remember reading anything like this here before. Together with Konrad Hinsen, I came to the conclusion that the problem is not in Scientific Python, so it must be the underlying LinearAlgebra code that changed between releases 17 and 22. I hacked up a simplified example.
Not sure whether it is the most simple case, but this resembles what I have in my code, and I'm quite sure it worked with Numeric 17.x, but currently it is horrible above order (4,4):

```python
import Numeric

def func(x, y):
    return x + 0.1*x**2 + 0.01*x**4 + 0.002*x**6 + 0.03*x*y + 0.001*x**4*y**5

x = []
y = []
z = []
for dx in Numeric.arange(0, 1, 0.01):
    for dy in Numeric.arange(0, 1, 0.01):
        x.append(dx)
        y.append(dy)
        z.append(func(dx, dy))

from Scientific.Functions import Polynomial

data = Numeric.transpose([x, y])
z = Numeric.array(z)
for i in range(10):
    print data[i], z[i]

pol = Polynomial.fitPolynomial((4, 4), data, z)
print pol.coeff
```

For (4,4) this prints:

```
[[  1.84845529e-05  -7.60502772e-13   2.71314749e-12  -3.66731796e-12   1.66977148e-12]
 [  9.99422967e-01   3.00000000e-02  -3.26346097e-11   4.42406519e-11  -2.01549767e-11]
 [  1.03899464e-01  -3.19668064e-11   1.14721790e-10  -1.55489826e-10   7.08425891e-11]
 [ -9.40275000e-03   4.28456838e-11  -1.53705205e-10   2.08279772e-10  -9.48840470e-11]
 [  1.80352695e-02  -1.10999843e-04   8.00662570e-04  -2.17266676e-03   2.47500004e-03]]
```

For (5,5):

```
[[ -2.25705839e+03   6.69051337e+02  -6.60470163e+03   6.66572425e+03  -8.67897022e+02   1.83974866e+03]
 [ -2.58646837e+02  -2.46554689e+03   1.15965805e+03   7.01089888e+03  -2.11395436e+03   2.10884815e+03]
 [  3.93307499e+03   4.34484805e+02  -4.84080392e+03   5.90375330e+03   1.16798049e+03  -4.14163933e+03]
 [  1.62814750e+03   2.08717457e+03   1.15870693e+03  -3.37838057e+03   3.49821689e+03   5.80572585e+03]
 [  4.54127557e+02  -1.56645524e+03   4.58997025e+00   1.69772635e+03  -1.37751039e+03  -7.59726558e+02]
 [  2.37878239e+03   9.43032094e+02   8.58518644e+02  -8.35846339e+03  -5.55845668e+02   1.87502761e+03]]
```

which is clearly wrong. I appreciate any help!

Regards, Rob

-- Rob W.W. Hooft || rob@hooft.net || http://www.hooft.net/people/rob/
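[Editor's note: for readers reproducing this today, here is a minimal sketch of the same 2D fit in modern NumPy. It builds the design matrix of monomials x**i * y**j explicitly and solves with `numpy.linalg.lstsq`; the column ordering is an illustrative choice, not Scientific Python's internal layout.]

```python
import numpy as np

def func(x, y):
    return x + 0.1*x**2 + 0.01*x**4 + 0.002*x**6 + 0.03*x*y + 0.001*x**4*y**5

# Sample the function on the same 100x100 grid on [0, 1) as the original example.
xs, ys = np.meshgrid(np.arange(0, 1, 0.01), np.arange(0, 1, 0.01))
x, y = xs.ravel(), ys.ravel()
z = func(x, y)

# Design matrix for a (4,4) fit: one column per monomial x**i * y**j.
nx, ny = 4, 4
A = np.column_stack([x**i * y**j for i in range(nx + 1) for j in range(ny + 1)])

# Least-squares fit; reshape so entry [i, j] is the coefficient of x**i * y**j.
coeff, *_ = np.linalg.lstsq(A, z, rcond=None)
print(coeff.reshape(nx + 1, ny + 1))
```

The reshaped coefficient matrix should closely match the sane (4,4) output quoted above, e.g. roughly 0.03 for the x*y term.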
I tested the problem with:

1. Numeric 23.1 under Python 2.3.2
2. numarray 0.8 (I made a copy of the Scientific package where all calls to Numeric were replaced by numarray), under Python 2.3.2

The results were about the same -- huge coefficients for the 5th-order polynomials. I would expect a reliable fit for a high-order polynomial only under very special circumstances, so this is not a big surprise. My advice is:

* Make sure that this is a bug and not a result of numerical instability. If you can trace it down and point to a bug, then report it. The numarray package is very usable and under very active and rapid development, so bugs are fixed fast.
* Look for a solution in the scipy package: it is generally better than Scientific.
* A polynomial fit is relatively simple --- you could write one of your own in less than a day's work. Since, as I said, the problem is in many cases unstable, you'd have the chance to implement a more stable linear-equation solver.

Nadav.

On Fri, 2003-10-31 at 15:19, Rob W.W. Hooft wrote:
Nadav Horesh wrote:
Thanks for your efforts. The polynomial we're trying to fit here is not extremely unstable. As I said, with Numeric 17.1.2 my class of problems used to be stable up to at least 9th order. I really suspect a bug was introduced here, one that is difficult to pinpoint because everybody reacts the natural way: "this is an intrinsically unstable problem, so this is not unexpected". Somehow it could have been better, though! I managed to work around the problem so far by using a different solver also built into Scientific Python, so I am saved for now. Regards, Rob -- Rob W.W. Hooft || rob@hooft.net || http://www.hooft.net/people/rob/
Many unstable problems have a stable solution if you choose the right algorithm. The question is whether the developers decided somewhere to switch the equation solver, or whether there is a real bug. I hope that one of the developers will reply in this forum. I will try to look at it as well, since it is a core component of the linear algebra package. Also, if you think your work-around is useful --- please post it here! Nadav Rob Hooft wrote:
Nadav Horesh wrote:
The workaround is to use "generalized_inverse" instead of "solve_linear_equations". The changes in the latter routine since 17.1.2 are:

```diff
@@ -269,18 +408,37 @@
     bstar = Numeric.zeros((ldb,n_rhs),t)
     bstar[:b.shape[0],:n_rhs] = copy.copy(b)
     a,bstar = _castCopyAndTranspose(t, a, bstar)
-    lwork = 8*min(n,m) + max([2*min(m,n),max(m,n),n_rhs])
     s = Numeric.zeros((min(m,n),),real_t)
-    work = Numeric.zeros((lwork,), t)
+    nlvl = max( 0, int( math.log( float(min( m,n ))/2. ) ) + 1 )
+    iwork = Numeric.zeros((3*min(m,n)*nlvl+11*min(m,n),), 'l')
     if _array_kind[t] == 1: # Complex routines take different arguments
-        lapack_routine = lapack_lite.zgelss
-        rwork = Numeric.zeros((5*min(m,n)-1,), real_t)
+        lapack_routine = lapack_lite.zgelsd
+        lwork = 1
+        rwork = Numeric.zeros((lwork,), real_t)
+        work = Numeric.zeros((lwork,),t)
         results = lapack_routine( m, n, n_rhs, a, m, bstar,ldb , s, rcond,
-                        0,work,lwork,rwork,0 )
+                        0,work,-1,rwork,iwork,0 )
+        lwork = int(abs(work[0]))
+        rwork = Numeric.zeros((lwork,),real_t)
+        a_real = Numeric.zeros((m,n),real_t)
+        bstar_real = Numeric.zeros((ldb,n_rhs,),real_t)
+        results = lapack_lite.dgelsd( m, n, n_rhs, a_real, m, bstar_real,ldb , s, rcond,
+                        0,rwork,-1,iwork,0 )
+        lrwork = int(rwork[0])
+        work = Numeric.zeros((lwork,), t)
+        rwork = Numeric.zeros((lrwork,), real_t)
+        results = lapack_routine( m, n, n_rhs, a, m, bstar,ldb , s, rcond,
+                        0,work,lwork,rwork,iwork,0 )
     else:
-        lapack_routine = lapack_lite.dgelss
+        lapack_routine = lapack_lite.dgelsd
+        lwork = 1
+        work = Numeric.zeros((lwork,), t)
+        results = lapack_routine( m, n, n_rhs, a, m, bstar,ldb , s, rcond,
+                        0,work,-1,iwork,0 )
+        lwork = int(work[0])
+        work = Numeric.zeros((lwork,), t)
         results = lapack_routine( m, n, n_rhs, a, m, bstar,ldb , s, rcond,
-                        0,work,lwork,0 )
+                        0,work,lwork,iwork,0 )
     if results['info'] > 0:
         raise LinAlgError, 'SVD did not converge in Linear Least Squares'
     resids = Numeric.array([],t)
```

I'm not deep enough into this to know where the new version goes wrong.
Regards, Rob Hooft -- Rob W.W. Hooft || rob@hooft.net || http://www.hooft.net/people/rob/
On Sun, 02 Nov 2003 17:25:34 +0100 Rob Hooft wrote:

Rob> - lapack_routine = lapack_lite.dgelss
Rob> + lapack_routine = lapack_lite.dgelsd

Well, here the underlying LAPACK routine was changed to the newer and significantly faster divide-and-conquer routine. (The same holds for the complex version.) This could be the problem, which you should test; see the LAPACK documentation for details. Nevertheless I would advise against reversing that change, as the performance differences can really be large (although I haven't used either of these specific functions here). Maybe you can keep a copy of the old version in your own project if really necessary? (After all, there seems to be some agreement that you were just "lucky" to find a working algorithm in the first place.) Greetings, Jochen -- Einigkeit und Recht und Freiheit http://www.Jochen-Kuepper.de Liberté, Égalité, Fraternité GnuPG key: CC1B0B4D (Part 3 you find in my messages before fall 2003.)
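[Editor's note: with a modern SciPy the two LAPACK drivers discussed here can be compared directly, since `scipy.linalg.lstsq` exposes the driver choice. A small sketch on a well-conditioned toy system:]

```python
import numpy as np
from scipy.linalg import lstsq

# Well-conditioned overdetermined system with known solution x = ones(5).
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5))
b = A @ np.ones(5)

# 'gelsd' is the divide-and-conquer SVD driver that Numeric switched to;
# 'gelss' is the older simple SVD driver it replaced.
x_d, res_d, rank_d, sv_d = lstsq(A, b, lapack_driver='gelsd')
x_s, res_s, rank_s, sv_s = lstsq(A, b, lapack_driver='gelss')
print(np.max(np.abs(x_d - x_s)))
```

On a healthy problem like this the two drivers agree to machine precision; the question in the thread was whether they diverge on the ill-conditioned polynomial design matrix.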
On Sunday 02 November 2003 14:21, Nadav Horesh wrote:
The polynomial fit is indeed simple, and the routine from ScientificPython that Rob uses is only 20 lines long, most of that for error checking and for setting up the arrays describing the system of linear equations. Looking at the singular values in Rob's problem, I see no evidence of the problem being particularly unstable. The singular values range from 1e-6 to 1, which should not pose any problem at double precision. Moreover, for a lower-order fit that gives reasonable results, the range is only slightly smaller. So I do suspect that something goes wrong in linear_least_squares. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen@cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais -------------------------------------------------------------------------------
The condition number of the matrix is about 1.0e7 and its dimensions are 10000x36: this is not a stable linear system, at least not for simple solvers. Thus my estimate is that the solver is not of high quality, but not buggy either. But the solution for the polynomial fit turns out to be much simpler: in the "fitPolynomial" function, the 5th and 4th lines before the end are commented out. These lines use the "generalized_inverse" procedure to solve the set of equations. Just uncomment these lines and comment out the two lines that follow; that's it. The solution to the 5x5 fit now seems OK at first glance. Nadav. On Mon, 2003-11-03 at 15:47, Konrad Hinsen wrote:
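[Editor's note: the condition-number estimate can be checked directly by building the (5,5) design matrix on a reduced grid, as Konrad suggests below, and inspecting its singular values:]

```python
import numpy as np

# 20x20 grid on [0, 1)^2 -> 400 rows; (5,5) monomial basis -> 36 columns.
xs, ys = np.meshgrid(np.arange(0, 1, 0.05), np.arange(0, 1, 0.05))
x, y = xs.ravel(), ys.ravel()
A = np.column_stack([x**i * y**j for i in range(6) for j in range(6)])

# Condition number = ratio of largest to smallest singular value.
s = np.linalg.svd(A, compute_uv=False)
print("condition number: %.3g" % (s[0] / s[-1]))
```

A condition number around 1e7 is large but still several orders of magnitude short of the double-precision limit of roughly 1e16, which is why a careful SVD-based solver should still handle this fit.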
On Tuesday 04 November 2003 12:19, Nadav Horesh wrote:
The condition of the matrix is about 1.0E7 and its dimensions are 10000x36: This is not a stable linear system, at least not for simple solvers.
You can cut down the number of data points to much less (I use 400) and the problem persists.
That is exactly what I recommended Rob to do as well. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen@cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais -------------------------------------------------------------------------------
On Fri, Oct 31, 2003 at 02:19:54PM +0100, Rob W.W. Hooft wrote:
Works for me:

(4,4)

```
[[ 0.00001848 -0.          0.         -0.          0.        ]
 [ 0.99942297  0.03       -0.          0.         -0.        ]
 [ 0.10389946 -0.          0.         -0.          0.        ]
 [-0.00940275  0.         -0.          0.         -0.        ]
 [ 0.01803527 -0.000111    0.00080066 -0.00217267  0.002475  ]]
```

(5,5)

```
[[-0.00000175 -0.          0.          0.         -0.          0.        ]
 [ 1.00008231  0.03       -0.          0.         -0.          0.        ]
 [ 0.09914353 -0.          0.         -0.          0.         -0.        ]
 [ 0.00350289  0.         -0.          0.         -0.          0.        ]
 [ 0.00333036 -0.          0.         -0.          0.          0.001     ]
 [ 0.00594     0.         -0.          0.         -0.          0.        ]]
```

(6,6)

```
[[ 0.    -0.     0.    -0.     0.    -0.     0.   ]
 [ 1.     0.03  -0.     0.    -0.     0.    -0.   ]
 [ 0.1   -0.     0.    -0.     0.    -0.     0.   ]
 [-0.     0.    -0.     0.    -0.     0.    -0.   ]
 [ 0.01  -0.     0.    -0.     0.     0.001  0.   ]
 [-0.     0.    -0.     0.    -0.     0.    -0.   ]
 [ 0.002 -0.     0.    -0.     0.    -0.     0.   ]]
```

(I've set sys.float_output_suppress_small to True to get a better picture.)

I'm using Numeric 23.1, ScientificPython 2.4.3, and Python 2.3.2, from Debian unstable. Numeric is compiled to use the system LAPACK libraries (using ATLAS for the BLAS). This is on both Athlon and PowerPC chips.

How did you compile Numeric? With or without the system LAPACK libraries?

-- |>|\/|< David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ | cookedm@physics.mcmaster.ca
David M. Cooke wrote:
How did you compile Numeric? With or without the system LAPACK libraries?
I'm probably like 90% of the other lazy people out there: using the lapack_lite that comes with the Numeric package. Regards, Rob Hooft -- Rob W.W. Hooft || rob@hooft.net || http://www.hooft.net/people/rob/
participants (7)

- David M. Cooke
- Jochen Küpper
- Konrad Hinsen
- Nadav Horesh
- Nadav Horesh
- Rob Hooft
- Rob W.W. Hooft