SLSQP Constrained Optimizer Status

I'm currently implementing the Sequential Least Squares Quadratic Programming (SLSQP) optimizer, by Dieter Kraft, for use in Scipy. The Fortran code being wrapped with F2PY is here: http://www.netlib.org/toms/733 (its use within Scipy has been cleared). SLSQP provides for bounds on the independent variables, as well as equality and inequality constraint functions, a capability that doesn't exist in scipy.optimize.

Currently the code works, although the constraint normals are being generated by approximation. I'm working on a way to pass these in; I think the most elegant way will be a single function that returns the matrix of constraint normals.

For a demonstration of what the code can do, here is an optimization of f(x,y) = 2xy + 2x - x**2 - 2y**2. Example 14.2 in Chapra & Canale gives the maximum as x=2.0, y=1.0. The unbounded optimization tests find this solution. As expected, it's faster when derivatives are provided rather than approximated.

Unbounded optimization. Derivatives approximated.
Elapsed time: 1.45792961121 ms
Results [[1.9999999515712266, 0.99999996181577444], -1.9999999999999984, 4, 0, 'Optimization terminated successfully.']

Unbounded optimization. Derivatives provided.
Elapsed time: 1.03211402893 ms
Results [[1.9999999515712266, 0.99999996181577444], -1.9999999999999984, 4, 0, 'Optimization terminated successfully.']

The following example uses an equality constraint to find the optimum when x=y.

Bound optimization. Derivatives approximated.
Elapsed time: 1.384973526 ms
Results [[0.99999996845920858, 0.99999996845920858], -0.99999999999999889, 4, 0, 'Optimization terminated successfully.']

I've tried to conform to the syntax used by the other optimizers in scipy.optimize. The function definition and docstring are below. If anyone is interested in testing it out, let me know.

    def fmin_slsqp( func, x0, eqcons=[], f_eqcons=None, ieqcons=[], f_ieqcons=None,
                    bounds=[], fprime=None, fprime_cons=None, args=(),
                    iter=100, acc=1.0E-6, iprint=1, full_output=0, epsilon=_epsilon ):
        """
        Minimize a function using Sequential Least SQuares Programming

        Description:
            Python interface function for the SLSQP Optimization subroutine
            originally implemented by Dieter Kraft.

        Inputs:
            func        - Objective function (in the form func(x, *args)).
            x0          - Initial guess for the independent variable(s).
            eqcons      - A list of functions of length n such that
                          eqcons[j](x0,*args) == 0.0 in a successfully
                          optimized problem.
            f_eqcons    - A function of the form f_eqcons(x, *args) that
                          returns an array in which each element must equal
                          0.0 in a successfully optimized problem.  If
                          f_eqcons is specified, eqcons is ignored.
            ieqcons     - A list of functions of length n such that
                          ieqcons[j](x0,*args) >= 0.0 in a successfully
                          optimized problem.
            f_ieqcons   - A function of the form f_ieqcons(x0, *args) that
                          returns an array in which each element must be
                          greater or equal to 0.0 in a successfully optimized
                          problem.  If f_ieqcons is specified, ieqcons is
                          ignored.
            bounds      - A list of tuples specifying the lower and upper
                          bound for each independent variable
                          [ (xl0, xu0), (xl1, xu1), ...]
            fprime      - A function that evaluates the partial derivatives
                          of func.
            fprime_cons - A function of the form f(x, *args) that returns the
                          m by n array of constraint normals.  If not
                          provided, the normals will be approximated.
                          Equality constraint normals precede inequality
                          constraint normals.
            args        - A sequence of additional arguments passed to func
                          and fprime.
            iter        - The maximum number of iterations (int).
            acc         - Requested accuracy (float).
            iprint      - The verbosity of fmin_slsqp.
                          iprint <= 0 : Silent operation
                          iprint == 1 : Print summary upon completion (default)
                          iprint >= 2 : Print status of each iterate and summary
            full_output - If 0, return only the minimizer of func (default).
                          Otherwise, output final objective function and
                          summary information.
            epsilon     - The step size for finite-difference derivative
                          estimates.

        Outputs: ( x, { fx, gnorm, its, imode, smode } )
            x     - The final minimizer of func.
            fx    - The final value of the objective function.
            its   - The number of iterations.
            imode - The exit mode from the optimizer, as an integer.
            smode - A string describing the exit mode from the optimizer.

        Exit modes are defined as follows:
            -1 : Gradient evaluation required (g & a)
             0 : Optimization terminated successfully.
             1 : Function evaluation required (f & c)
             2 : Number of equality constraints larger than number of
                 independent variables
             3 : More than 3*n iterations in LSQ subproblem
             4 : Inequality constraints incompatible
             5 : Singular matrix E in LSQ subproblem
             6 : Singular matrix C in LSQ subproblem
             7 : Rank-deficient equality constraint subproblem HFTI
             8 : Positive directional derivative for linesearch
             9 : Iteration limit exceeded
        """

- Rob Falck
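For concreteness, a call exercising the interface described above on that test problem might look like the sketch below. The import path and the initial guess are assumptions (the original test script is not shown in the message; a later test script in this thread imports fmin_slsqp from scipy.optimize.slsqp), and the maximum is found by minimizing the negated objective.

    from numpy import array
    from scipy.optimize.slsqp import fmin_slsqp  # assumed import path

    # Maximize f(x, y) = 2xy + 2x - x**2 - 2y**2 by minimizing its negative.
    def objective(x):
        return -(2.0*x[0]*x[1] + 2.0*x[0] - x[0]**2 - 2.0*x[1]**2)

    def objective_prime(x):
        # Analytic gradient of the negated objective.
        return array([-(2.0*x[1] + 2.0 - 2.0*x[0]),
                      -(2.0*x[0] - 4.0*x[1])])

    # Unbounded, derivatives provided; x0 = [0, 0] is an arbitrary starting point.
    x = fmin_slsqp(objective, [0.0, 0.0], fprime=objective_prime, iprint=0)
    print(x)  # should land near [2.0, 1.0]

With full_output left at its default of 0, only the minimizer is returned.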

Hi Rob, could you commit your changes to svn? I intend to provide connection to OpenOpt and try to bench the solver with other ones available. Also, having outputfcn would be nice, does the Fortran code provide the possibility? Regards, D.

outputfcn would allow for graphical progress of the optimizer, correct? I don't see any reason why that wouldn't be possible. I'll need to read up on properly incorporating it into scipy for svn (this is the first commit I've done for the project), but I'll try to have it up in the next day or three. I think its speed can be increased substantially with the addition of a constraint-normals argument, but I'll add it as is, with approximations, for now.
-- - Rob Falck

Could you at least provide the possibility of a user-supplied gradient of the objective function? This feature would make it possible to use p.connectIterFcn('df') (as is done for ALGENCAN and some other solvers), so OpenOpt graphical output would be enabled. Regards, D.

Yes, that's already implemented.
-- - Rob Falck

The source files for SLSQP have been submitted to Trac with Ticket #566 (I lack svn commit privileges), along with a short test script as an example. Feel free to test it and let me know what you think.

Hi Rob, could you check line 224 of slsqp.py:

    a = numpy.concatenate( (fprime_cons(x),zeros([la,1])),1)

I found the problem is here (lines 163-165):

    meq = len(eqcons)        # meq = The number of equality constraints
    m = meq + len(ieqcons)   # m = The total number of constraints
    la = array([1,m]).max()  # la =

When the user passes, for example, f_eqcons instead of eqcons, the code above does not handle that case, so it produces la = 1 and the error in concatenate. I will continue connecting slsqp to OpenOpt after you let me know the bug is fixed, ok? Regards, D.
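One way those counts could be made consistent when callable constraints are supplied is sketched below. This is only an illustration of the idea, using a hypothetical helper name, and is not the fix that ended up in slsqp.py.

    from numpy import atleast_1d

    def constraint_counts(x, eqcons=[], f_eqcons=None, ieqcons=[], f_ieqcons=None, args=()):
        # Hypothetical sketch: derive the constraint counts from whichever
        # form of the constraints the caller actually supplied.
        if f_eqcons is not None:
            meq = len(atleast_1d(f_eqcons(x, *args)))    # number of equality constraints
        else:
            meq = len(eqcons)
        if f_ieqcons is not None:
            mieq = len(atleast_1d(f_ieqcons(x, *args)))  # number of inequality constraints
        else:
            mieq = len(ieqcons)
        m = meq + mieq        # total number of constraints
        la = max(1, m)        # leading dimension expected by the Fortran routine
        return meq, m, la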

Thanks for the heads up. Passing the constraints via f_eqcons and f_ieqcons needs more work. I'll get on that asap.
-- - Rob Falck

I've submitted ticket #570, which includes an updated version of slsqp.py. The function fmin_slsqp should now properly accept constraints via the callable arguments f_eqcons and f_ieqcons. In addition, the callable arguments that provide the constraint Jacobians are now working as well. I also added a function to slsqp.py for approximating the Jacobians when fprime_eqcons or fprime_ieqcons are not provided; perhaps this routine would be better off in scipy.common, since it's very similar to approx_fprime. I also attached a new version of slsqp_test.py to the ticket that shows examples of how to use fmin_slsqp with the new constraint arguments. Please let me know if you have any questions or uncover any more issues. http://scipy.org/scipy/scipy/ticket/570 - Rob Falck
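For reference, a forward-difference Jacobian approximation in the spirit of approx_fprime can be sketched as follows. This is an illustrative stand-in with an assumed name and calling convention, not the routine attached to the ticket.

    import numpy as np

    def approx_jacobian(x, func, epsilon, *args):
        # Approximate the m-by-n Jacobian of a vector-valued func at x by
        # forward differences, one column per independent variable.
        x0 = np.asarray(x, dtype=float)
        f0 = np.atleast_1d(func(x0, *args))
        jac = np.zeros((len(f0), len(x0)))
        dx = np.zeros(len(x0))
        for i in range(len(x0)):
            dx[i] = epsilon
            jac[:, i] = (np.atleast_1d(func(x0 + dx, *args)) - f0) / epsilon
            dx[i] = 0.0
        return jac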

P.S. I have tried passing ieqcons and eqcons both as Python lists and both as numpy arrays; it doesn't work either way (I get the same error I mentioned). In the code I sent, the first case uses Python lists and the second uses arrays. Regards, D.

Thanks for the catch in the docstring. I've fixed that in the version of slsqp.py attached to the ticket. I'm not positive this is the correct minimum, but it is definitely trending towards this result when that error is encountered. This appears to be a case where SLSQP just can't quite zero in on the minimum. I know it's a kludge, but lowering the requested accuracy to 1.0E-5 yields a converged solution, within 3 or 4 decimal places of what appears to be the true minimum. Does this appear to be yielding proper results?

    from numpy import *
    from scipy.optimize.slsqp import fmin_slsqp

    N = 10
    M = 5
    ff = lambda x: (abs(x-M) ** 1.5).sum()
    x0 = cos(arange(N))
    print "Initial guess", x0

    c = lambda x: asfarray([2*x[0]**4 - 32, x[1]**2 + x[2]**2 - 8])
    h1 = lambda x: 1e1*(x[-1]-1)**4
    h2 = lambda x: (x[-2]-1.5)**4

    #TODO: pass bounds to fmin_slsqp when all other will work ok
    bounds = [(-6,6)]*10
    bounds[3] = (5.5, 6)
    #bounds[4] = (6, 4.5)
    diffInt = 1e-8

    # Unconstrained
    print "Unconstrained"
    x = fmin_slsqp(ff, x0, epsilon=diffInt, bounds=bounds, iprint=1, acc=1.0E-12)
    print x
    print "\n"

    # Inequality constraints
    print "Inequality constraints"
    x = fmin_slsqp(ff, x0, epsilon=diffInt, iprint=1, acc=1.0E-12,
                   f_ieqcons=c, bounds=bounds)
    print x
    print "\n"

    h1 = lambda x: 1e1*(x[-1]-1)**4
    h2 = lambda x: (x[-2]-1.5)**4

    # Inequality and equality constraints
    print "Inequality and Equality constraints"
    x = fmin_slsqp(ff, x0, epsilon=diffInt, iprint=1, acc=1.0E-12,
                   f_eqcons=lambda x: asfarray([h1(x), h2(x)]),
                   f_ieqcons=c, bounds=bounds)
    print "Solution vector", x
    print "Equality constraint values:", h1(x), h2(x)
    print "Inequality constraint values:", c(x)
    print "\n"

    # Inequality and equality constraints - lower requested accuracy
    print "Inequality and Equality constraints - lower requested accuracy"
    x = fmin_slsqp(ff, x0, epsilon=diffInt, iprint=1, acc=1.0E-5,
                   f_eqcons=lambda x: asfarray([h1(x), h2(x)]),
                   f_ieqcons=c, bounds=bounds)
    print "Solution vector", x
    print "Equality constraint values:", h1(x), h2(x)
    print "Inequality constraint values:", c(x)

- Rob Falck

Also, your bounds[4] entry is backwards. This results in a nasty glibc error on my system. I'll add the logic to test for this later today.
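A validation along these lines could catch reversed bounds up front; this is only a sketch of the kind of check meant here, not the code that was later committed.

    def check_bounds(bounds):
        # Reject reversed bounds before they reach the Fortran routine,
        # which can otherwise crash (e.g. with a glibc error).
        for i, (lower, upper) in enumerate(bounds):
            if lower > upper:
                raise ValueError("Bound %d is invalid: lower bound %g exceeds "
                                 "upper bound %g" % (i, lower, upper))

    bounds = [(-6, 6)] * 10
    bounds[4] = (6, 4.5)     # reversed on purpose
    # check_bounds(bounds)   # would raise ValueError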

Hi Rob, it seems like everything is working correctly now. Do you mind if I commit your changes from the ticket to the svn repository? Regards, D.

I was granted commit privileges, and have already submitted these changes. I'm glad things are working well for you. If you have any comparisons with other OpenOpt solvers that you could share, I would appreciate seeing them.
-- - Rob Falck

The problem is that it doesn't pass the bench1 & bench2 problems I used (though it does pass simpler examples). Also, it sometimes stops not far from the argmin and reports "Singular matrix C in LSQ subproblem". You can try it yourself (using the problems you deal with); I have already committed the related changes to OO. Regards, D.

Rob Falck <robfalck <at> gmail.com> writes:
[clip]
I have a question regarding the SLSQP code. If inexact linesearch is used, which L1-penalty function is used? I'm looking forward to your answer, Monika

21.11.2013 18:59, Monika wrote: [clip]
I have a question regarding the SLSQP Code. If inexact linesearch is used, which L1-penalty function is used?
The source code can be found in http://www.netlib.org/toms/733 -- Pauli Virtanen