optimize.fmin_l_bfgs_b and 'ABNORMAL_TERMINATION_IN_LNSRCH'
I am using optimize.fmin_l_bfgs_b and getting the following output:

(array([ 8142982.47310469, 0. , 614438.11725001]),
 1.58474444864e+12,
 {'funcalls': 21,
  'grad': array([ 0. , 952148.4375, 24414.0625]),
  'task': 'ABNORMAL_TERMINATION_IN_LNSRCH',
  'warnflag': 2})

What does this mean? I am trying to do a least squares curve fit between some noisy data and a nonlinear model. I need to constrain one of my fit parameters to be positive. I do not have an fprime function, so the gradient is being determined numerically.

Is there a better routine to use?

I get a fairly close answer using my own hacked solution of using optimize.fmin, where the cost function adds a gigantic penalty to the cost if the middle coefficient is greater than 0:

[ 8.16174249e+06 1.93528613e-10 6.14626152e+05]

So, I don't think the answer is bad; I just want to know why it terminated abnormally and whether or not I can trust the result.

Thanks,

Ryan
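[Editor's note: fmin_l_bfgs_b can enforce the sign constraint directly through its bounds argument, rather than through a penalty term. A minimal sketch, assuming a made-up exponential model, noisy data, and starting point; only the approx_grad and bounds keywords are the real fmin_l_bfgs_b API, everything else here is illustrative:]

import numpy as np
from scipy import optimize

def cost(p, x, y):
    # Sum-of-squares error for a hypothetical model y = p0*exp(-p1*x) + p2.
    return np.sum((y - (p[0] * np.exp(-p[1] * x) + p[2])) ** 2)

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-3.0 * x) + 0.5 + 0.01 * np.random.randn(50)

p0 = np.array([1.0, 1.0, 1.0])
bounds = [(None, None), (0.0, None), (None, None)]  # middle parameter >= 0

# approx_grad=True tells the routine to estimate the gradient itself
# by forward finite differences.
p_opt, f_opt, info = optimize.fmin_l_bfgs_b(
    cost, p0, args=(x, y), approx_grad=True, bounds=bounds)
print(p_opt, info['task'])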
On 10/29/07, Ryan Krauss <ryanlists@gmail.com> wrote:
[quoted message snipped]
Is it possible that your numerical gradient is (very) inaccurate?

Dominique
Yes.

On 10/29/07, Dominique Orban <dominique.orban@gmail.com> wrote:
Is it possible that your numerical gradient is (very) inaccurate?
Your variables differ by several orders of magnitude, so you should either use software with automatic scaling or do it yourself:

scalePhactor = array([1e6, 1e-10, 1e5])
x0 /= scalePhactor
x_opt = a_solver(objfunc, x0, ...)
x_opt *= scalePhactor

#########
def objfunc(x):
    x = x.copy() * scalePhactor
    ....
#########

I intend to implement automatic scaling in scikits.openopt, but I have no time for now (btw, it's present in my MATLAB OpenOpt version). Also, you could be interested in other OO solvers for your problem (ALGENCAN or lincher); a connection to lbfgsb is provided as well.

Regards, D.
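[Editor's note: a runnable rendering of the scaling idea above, assuming fmin_l_bfgs_b as the solver; the cost function is a placeholder quadratic and the numbers are illustrative, not Ryan's actual problem:]

import numpy as np
from scipy import optimize

scale = np.array([1e6, 1e-10, 1e5])  # rough magnitudes of the three variables

def cost(p):
    # Placeholder quadratic standing in for the real sum-of-squares cost.
    return np.sum((p - np.array([8.16e6, 1.9e-10, 6.15e5])) ** 2)

def scaled_cost(z):
    # The solver works on z, where all components are of order one;
    # the cost is evaluated at the original scale.
    return cost(z * scale)

z0 = np.array([8e6, 1e-10, 6e5]) / scale  # starting point, near unity once scaled
z_opt, f_opt, info = optimize.fmin_l_bfgs_b(scaled_cost, z0, approx_grad=True)
p_opt = z_opt * scale                     # map the answer back to original units

[With components of z all of order one, a single finite-difference step size is reasonable for every variable, which also helps the gradient-accuracy problem discussed in this thread.]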
On 10/29/07, Dominique Orban <dominique.orban@gmail.com> wrote:
Is it possible that your numerical gradient is (very) inaccurate?
On 10/29/07, Ryan Krauss <ryanlists@gmail.com> wrote:
Yes.
That may be it. LBFGS-B computes a search direction d by first solving a linear system of the form Bd = -g, where B is some positive definite matrix and g is the gradient (approximated numerically in your case). Next, it performs a linesearch along d.

For everything to work well, the angle between d and the exact gradient must be larger than 90 degrees, but must not come too close to 90 degrees. If it does, the linesearch may fail to identify an appropriate steplength. In typical situations, when g is the exact gradient, all is well as long as B is not too ill conditioned. When g is no longer exact, anything is possible. You may want to look into obtaining a more accurate approximation of your gradient.

I hope this helps,

Dominique
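[Editor's note: one way to get a more accurate gradient is to supply a central-difference fprime instead of relying on approx_grad, which uses forward differences. A sketch under that assumption; the helper below and the names cost, p0, and bounds in the commented usage are hypothetical:]

import numpy as np
from scipy import optimize

def central_diff_grad(f, x, h=1e-6):
    # Central differences are O(h**2) accurate, versus O(h) for the
    # forward differences used when approx_grad=True.
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x))
        e[i] = h * max(1.0, abs(x[i]))  # scale the step to the variable size
        g[i] = (f(x + e) - f(x - e)) / (2.0 * e[i])
    return g

# Hypothetical usage with a one-argument cost function cost(p):
# p_opt, f_opt, info = optimize.fmin_l_bfgs_b(
#     cost, p0, fprime=lambda p: central_diff_grad(cost, p), bounds=bounds)

[This costs 2n function evaluations per gradient instead of n+1, but a better descent direction can more than pay for that by avoiding failed linesearches.]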
participants (3)
- dmitrey
- Dominique Orban
- Ryan Krauss