Shawn, I use scipy.optimize.fmin_l_bfgs_b extensively, and I always explicitly normalize the input before passing it to the optimizer. I usually do this by writing my function as g(x, norm) == f([x[0]*norm[0], x[1]*norm[1], ...]) and passing it to the optimizer as fmin_l_bfgs_b(func=g, x0=[1., 1.], args=(norm,)) (note that args must be a tuple). You can achieve rigorous convergence by multiplying norm by the result of the optimization and iterating, but convergence behavior far from a minimum may depend strongly both on your initial guess and on your initial normalization factors. Eric On 1/26/2014 3:01 PM, Yuxiang Wang wrote:
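A minimal sketch of the wrapper approach Eric describes, using a made-up badly scaled objective (the function f, the factors in norm, and the starting point are illustrative assumptions, not from the original thread):

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

# Hypothetical badly scaled objective: minimum at x = (1e3, 1e-4),
# so the two parameters differ by seven orders of magnitude.
def f(x):
    return (x[0] / 1e3 - 1.0) ** 2 + (x[1] / 1e-4 - 1.0) ** 2

# Wrapper that works in normalized coordinates y, where x = y * norm.
def g(y, norm):
    return f(y * norm)

norm = np.array([1e3, 1e-4])   # normalization factors from the initial guess
y_opt, fval, info = fmin_l_bfgs_b(
    g,
    x0=np.array([3.0, -2.0]),  # O(1) starting point in the scaled space
    args=(norm,),              # args must be a tuple, hence the trailing comma
    approx_grad=True,          # finite-difference gradients, for simplicity
)
x_opt = y_opt * norm           # map the result back to the original units
```

To iterate as Eric suggests, one could set norm = y_opt * norm and rerun the optimization from x0 = [1., 1.] until y_opt stops moving.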
Dear all,
During optimization, it is often helpful to normalize the input parameters so that they are on the same order of magnitude, which can greatly improve convergence. For example, if we want to minimize f(x) and a reasonable initial guess is x0=[1e3, 1e-4], it might be helpful to rescale x0[0] and x0[1] to about the same order of magnitude (often O(1)).
My question is: I have been using scipy.optimize, specifically the L-BFGS-B algorithm. Do I need to normalize the parameters manually by writing a wrapper function, or does the algorithm already do it for me?
Thank you!
-Shawn