[SciPy-User] tip (maybe): scaling and optimizers

josef.pktd at gmail.com
Thu Oct 25 23:10:51 EDT 2012


mainly an observation:

After figuring out that fmin_slsqp is scale sensitive, I switched to
normalizing (rescaling) the log-likelihood functions in statsmodels.
Log-likelihood functions are our main objective functions for nonlinear
optimization.
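For concreteness, here is a minimal sketch of what "normalizing" means
(just an illustration, not statsmodels' actual code; the normal model,
starting values and fmin_bfgs call are made up for the example): divide
the negative log-likelihood by the number of observations, so the
objective and its gradient stay roughly O(1) regardless of sample size.

import numpy as np
from scipy import optimize, stats

np.random.seed(0)
data = np.random.normal(loc=2.0, scale=3.0, size=1000)

def neg_loglike(params, data):
    # negative log-likelihood of an i.i.d. normal sample,
    # sigma parameterized as exp(log_sigma) to keep it positive
    mu, log_sigma = params
    return -stats.norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)).sum()

def neg_loglike_scaled(params, data):
    # same objective divided by nobs, so its magnitude (and gradient)
    # does not grow with the sample size
    return neg_loglike(params, data) / data.shape[0]

start = np.array([0.0, 0.0])
est = optimize.fmin_bfgs(neg_loglike_scaled, start, args=(data,), disp=False)
print(est)   # roughly [2.0, log(3.0)]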

Today I was, by accident, working on an older branch of statsmodels, and
the results I got with fmin_bfgs were awful.
After switching to statsmodels master, the results I got with
fmin_bfgs were much better (very good: robust and accurate).

The impression I got from this, and from a discussion with Ian Langmore
(on an L1-penalized optimization pull request), is that many scipy
optimizers might be scale sensitive with their default settings.


Watch the scale of your objective function !?
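For a contrived sketch of why the defaults can bite (my own toy example,
not from the statsmodels code): fmin_bfgs declares convergence once the
gradient norm drops below gtol, an absolute threshold of 1e-5 by default,
so multiplying the objective by a small constant shrinks the gradient by
the same factor and the optimizer can "converge" far from the minimum.

import numpy as np
from scipy import optimize

def f(x):
    return 0.5 * np.sum((x - 3.0) ** 2)    # minimum at [3, 3]

x0 = np.zeros(2)
print(optimize.fmin_bfgs(f, x0, disp=False))                      # ~[3., 3.]
print(optimize.fmin_bfgs(lambda x: 1e-6 * f(x), x0, disp=False))  # [0., 0.]
# The rescaled objective's gradient at x0 has norm 3e-6 < 1e-5 = gtol,
# so fmin_bfgs stops right at the starting point; scaling the objective
# back up (or lowering gtol) avoids the premature stop.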

(qualifier: I don't remember whether statsmodels master contains other
changes, missing from my old branch, that make the optimization more robust.)

Josef
------
"anecdotal evidence ain't proof"
http://editthis.info/logic/Informal_Fallacies#Anecdotal_Evidence
http://sayings.jacomac.de/details.php?id=10
( http://www.unilang.org/viewtopic.php?f=11&t=38585&start=0&st=0&sk=t&sd=a )
----
