Conjugate gradients minimizer
imbosol-1048743253 at aerojockey.com
Thu Mar 27 07:16:50 CET 2003
Gareth McCaughan wrote:
> Carl Banks wrote:
>> But let me recommend something else. You have only a thousand
>> variables, which is really not a lot. Conjugate gradient is best
>> suited for systems of millions of variables, because it doesn't have
>> to store a matrix. It doesn't scale down well.
>> So, unless you have reason to believe conjugate-gradient is specially
>> suited to this problem (indeed, I have heard of an atomic structure
>> problem that was like that), or if you plan to scale up to DNA
>> crystals someday, use the BFGS method instead.
> I concur. If it turns out that the overhead from BFGS
> *is* too much for some reason, you might also consider
> something intermediate between conjugate gradient and
> BFGS, such as L-BFGS.
> But it's not true that BFGS requires inverting a matrix
> at each step. The matrix you maintain is an approximation
> to the *inverse* of the Hessian, and no inversion is
> required.
Oh, silly me. I probably said that, but I just meant "you have to
store an NxN matrix." I'd never heard of L-BFGS, though. I admit I
got my optimization knowledge from 20-year-old texts.
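(The trade-off discussed above can be seen directly in SciPy, which isn't mentioned in the thread but implements both methods: `method="BFGS"` maintains a dense NxN inverse-Hessian approximation, while `method="L-BFGS-B"` keeps only a few recent vector pairs. A minimal sketch on a thousand-variable quadratic test objective, chosen here purely for illustration:)

```python
import numpy as np
from scipy.optimize import minimize

n = 1000  # roughly the problem size mentioned in the thread

def f(x):
    # Simple convex test objective: sum of squares plus a
    # nearest-neighbor coupling term (minimum at x = 0).
    return np.sum(x**2) + np.sum((x[1:] - x[:-1])**2)

def grad(x):
    # Analytic gradient of f.
    g = 2 * x
    d = x[1:] - x[:-1]
    g[1:] += 2 * d   # d/dx_j of (x_j - x_{j-1})^2
    g[:-1] -= 2 * d  # d/dx_j of (x_{j+1} - x_j)^2
    return g

x0 = np.ones(n)

# Full BFGS: stores and updates a dense n-by-n matrix.
res_bfgs = minimize(f, x0, jac=grad, method="BFGS")

# Limited-memory BFGS: stores only a handful of (s, y) vector pairs,
# so memory stays O(n) instead of O(n^2).
res_lbfgs = minimize(f, x0, jac=grad, method="L-BFGS-B")
```

(Both should reach the minimum here; the difference only matters for memory and per-iteration cost as n grows.)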