[Tutor] constrained least squares fitting using python

Oscar Benjamin oscar.j.benjamin at gmail.com
Sat Aug 10 17:19:07 CEST 2013


On 10 August 2013 08:18, eryksun <eryksun at gmail.com> wrote:
> On Fri, Aug 9, 2013 at 9:12 PM, Oscar Benjamin
> <oscar.j.benjamin at gmail.com> wrote:
>>
>> Which of the two solutions is more accurate in terms of satisfying the
>> constraint and minimising the objective function? Do they take similar
>> times to run?
>
> In my example, minimize() required 28 function evaluations, so your
> solution is more efficient.

Really what I meant was: do they both run in the blink of an eye? I'm
guessing they do, since you haven't given a time in seconds.
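
For what it's worth, timeit gives a number in seconds easily enough.
Here's a minimal sketch with a toy sum-of-squares objective (not the
one from your script), just to show the pattern:

    import timeit
    import numpy as np
    from scipy.optimize import minimize

    def objective(p):
        # Toy objective: squared distance from the all-ones vector.
        return np.sum((p - 1.0)**2)

    # Equality constraint: the components of p must sum to zero.
    cons = [{'type': 'eq', 'fun': np.sum}]

    t = timeit.timeit(
        lambda: minimize(objective, np.zeros(4), method='SLSQP',
                         constraints=cons),
        number=100)
    print('%.3g s per call' % (t / 100))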

> As far as minimizing the objective and satisfying the constraint, the
> results are very close:
>
>     >>> objective(pcons, ymeas, X)   # minimize
>     2036.6364327061785
>     >>> np.sum(pcons)
>     -2.0816681711721685e-17
>
>     >>> objective(plsqcon, ymeas, X) # analytic
>     2036.6364327060171
>     >>> np.sum(plsqcon)
>     0.0

That seems plenty good enough. It's impressive that the more general
SLSQP method can be so accurate.
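
For anyone reading this in the archive without the code from the
earlier messages, here is a minimal sketch of the two approaches being
compared. The data, the objective and the sum-to-zero constraint are
reconstructed from context rather than taken from the original script:
fit y ~ X*p by least squares subject to sum(p) = 0, once with SLSQP
and once analytically via the Lagrange-multiplier (KKT) system.

    import numpy as np
    from scipy.optimize import minimize

    def objective(p, ymeas, X):
        # Sum of squared residuals.
        return np.sum((ymeas - X.dot(p))**2)

    # Made-up data for illustration.
    rng = np.random.RandomState(0)
    X = rng.randn(50, 4)
    ptrue = np.array([1.5, -2.0, 0.25, 0.25])
    ymeas = X.dot(ptrue) + 0.1 * rng.randn(50)

    # 1) General purpose: SLSQP with the equality constraint sum(p) == 0.
    cons = [{'type': 'eq', 'fun': np.sum}]
    res = minimize(objective, np.zeros(4), args=(ymeas, X),
                   method='SLSQP', constraints=cons)
    pcons = res.x

    # 2) Analytic: solve the KKT system for
    #      minimize ||X p - y||^2  subject to  c.T p = 0,  c = ones,
    #    i.e. stationarity 2 X.T X p + lam*c = 2 X.T y plus the constraint.
    n = X.shape[1]
    c = np.ones(n)
    KKT = np.zeros((n + 1, n + 1))
    KKT[:n, :n] = 2 * X.T.dot(X)
    KKT[:n, n] = c
    KKT[n, :n] = c
    rhs = np.concatenate([2 * X.T.dot(ymeas), [0.0]])
    plsqcon = np.linalg.solve(KKT, rhs)[:n]

    print(objective(pcons, ymeas, X), np.sum(pcons))
    print(objective(plsqcon, ymeas, X), np.sum(plsqcon))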

> I think SLSQP is using something like gradient descent on successive
> quadratic approximations of the objective. But it's not something I
> know a lot about. Here's the paper and links to the FORTRAN source
> code:
>
> Dieter Kraft, "Algorithm 733: TOMP–Fortran modules for optimal control
> calculations," ACM Transactions on Mathematical Software, vol. 20, no.
> 3, pp. 262-281 (1994)
> http://doi.acm.org/10.1145/192115.192124
>
> http://www.netlib.org/toms/733
> https://github.com/scipy/scipy/blob/master/scipy/optimize/slsqp/slsqp_optmz.f

I just had a quick look at that code; I've decided I won't even try to
understand it!
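
Happily you rarely need to: the result object that minimize() returns
exposes the Fortran code's exit status and counters, which is usually
all you want from it. A minimal sketch with a toy objective (attribute
names as in scipy's OptimizeResult):

    import numpy as np
    from scipy.optimize import minimize

    def objective(p):
        return np.sum((p - 1.0)**2)

    res = minimize(objective, np.zeros(4), method='SLSQP',
                   constraints=[{'type': 'eq', 'fun': np.sum}],
                   options={'disp': True})  # print a convergence summary
    print(res.success, res.message)  # exit status reported by the solver
    print(res.nfev, res.nit)         # objective evaluations / iterations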


Oscar

