[SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad

Jason Rennie jrennie at gmail.com
Mon Oct 5 12:56:27 EDT 2009


The low-down:

   - "Warning: Desired error not necessarily achieved due to precision loss"
   - I'm passing objective (obj) and gradient (grad)
   - I checked that obj and grad are correct using my python equivalent of
   http://people.csail.mit.edu/jrennie/matlab/checkgrad2.m (a rough sketch
   of that check appears after this list)
   - I have the same problem whether I use norm=2 or no norm argument
   - At termination, the objective and the 2-norm of grad are 2.484517e+06
   and 2.644732e+07, respectively
   - Subtracting grad*1e-10 from the parameter vector yields an objective
   and gradient 2-norm of 2.417658e+06 and 2.413900e+07, respectively
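
For reference, the check I'm doing is roughly the following sketch (not my
exact code; ser.obj, ser.grad, and w0 are the objective, gradient, and
starting point from the call below):

import numpy as np

def check_grad(obj, grad, w, eps=1e-6):
    """Central finite-difference gradient check, in the spirit of
    checkgrad2.m: returns the relative difference between the analytic
    gradient and a numerical estimate (should be tiny, e.g. < 1e-6)."""
    w = np.asarray(w, dtype=float)
    g = np.asarray(grad(w), dtype=float)
    g_num = np.empty_like(g)
    for i in range(w.size):
        step = np.zeros_like(w)
        step[i] = eps
        g_num[i] = (obj(w + step) - obj(w - step)) / (2.0 * eps)
    return np.linalg.norm(g - g_num) / np.linalg.norm(g + g_num)

# e.g. check the gradient at the starting point and confirm that a tiny
# step along -grad lowers the objective:
#   rel_err = check_grad(ser.obj, ser.grad, w0)
#   assert ser.obj(w0 - 1e-10 * ser.grad(w0)) < ser.obj(w0)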

I wrote an implementation of CG in matlab/octave a few years ago, so I
realize the problem could be as simple as my needing to set a different
epsilon or tolerance value.  Any suggestions?  Nothing jumped out at me when
I read the argument list carefully and glanced over the code, but I could
easily be missing something.  My current call:

wopt = scipy.optimize.fmin_cg(f=ser.obj, fprime=ser.grad, x0=w0, norm=2,
                              callback=cb)
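
For completeness, here's the same call with the stopping knobs spelled out
and full_output requested so I can see the warnflag (gtol, maxiter, and
full_output are standard fmin_cg arguments; the particular values below are
just guesses on my part, not a known fix):

wopt, fopt, func_calls, grad_calls, warnflag = scipy.optimize.fmin_cg(
    f=ser.obj, fprime=ser.grad, x0=w0, norm=2,
    gtol=1e-5,      # stop once the gradient norm falls below this
    maxiter=1000,   # cap on the number of CG iterations
    full_output=1,  # also return the termination flag
    callback=cb)
# warnflag == 2 is the "precision loss" termination; 1 means maxiter was hit.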

OTOH, is it possible that fmin_cg needs additional tuning?  I don't have
much understanding of how solid the fmin_cg code is.  Has it seen tons of
use/testing, or is it relatively fresh code?

FYI, I'm using SciPy 0.7.0 (the version that ships with the current Ubuntu).
My parameter vector has length 12 and I have ~50 data points.  I've seen CG
work quite nicely on problems with a million dimensions...

Thanks,

Jason