[SciPy-Dev] WIP: Class based Optimizers

Andrew Nelson andyfaff at gmail.com
Tue Feb 13 23:25:01 EST 2018


The iteration-based nature will allow interested users to keep the full
history, but at the moment they'll have to do it themselves.
One suggestion was to create an optimisation runner to control how the
iteration proceeds.
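
Roughly, such a runner could look like the sketch below. It is purely
illustrative: it assumes the solver is iterable (one iteration per step)
and exposes its current best parameter vector as `solver.x`, neither of
which is necessarily the API in the PR.

    from itertools import islice

    import numpy as np

    def run_and_record(solver, maxiter=500):
        """Drive an iterator-style solver, keeping every iterate.

        Assumes `solver` is iterable and exposes the current best
        parameter vector as `solver.x`; both are illustrative
        assumptions, not necessarily the API proposed in the PR.
        """
        history = []
        for _ in islice(solver, maxiter):
            history.append(np.copy(solver.x))
        return np.asarray(history)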

This is also a golden opportunity to change how we specify bounds, etc.

On 14 Feb. 2018 15:01, "xoviat" <xoviat at gmail.com> wrote:

> I can use the MATLAB optimizers and solve_ivp as points of reference. When
> compared to MATLAB, the current interface stacks up pretty well. However,
> both MATLAB and `scipy.optimize` have a less-than-ideal interface for
> gathering optimization history.
>
>
>
> Perhaps it would be more useful to align `scipy.optimize` with
> `solve_ivp`. In both cases you're calculating a new point using the
> history of the previous points (though of course the goal, and the
> formulas, are different). The user interfaces differ, though:
> `scipy.optimize` only returns the last point, whereas `solve_ivp`
> returns the entire solution history. It's obvious why: a solution to a
> set of differential equations mathematically must span the domain of
> interest, whereas an optimization result mathematically consists of
> only a single point. But there is no reason, at least that I can tell,
> not to return all the points that we tried while running the
> optimizer. It would at least provide a much simpler way to understand
> what the optimizer is doing.
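>
> For illustration, this is the contrast with the current, released
> interfaces: `solve_ivp` hands back the full trajectory, while gathering
> the iterate history from `minimize` requires a callback.
>
>     import numpy as np
>     from scipy.integrate import solve_ivp
>     from scipy.optimize import minimize, rosen
>
>     # solve_ivp returns every evaluated time point and state...
>     sol = solve_ivp(lambda t, y: -0.5 * y, (0, 10), [2.0],
>                     t_eval=np.linspace(0, 10, 50))
>     print(sol.t.shape, sol.y.shape)
>
>     # ...whereas minimize only returns the final iterate; keeping the
>     # intermediate points means recording them yourself in a callback.
>     history = []
>     res = minimize(rosen, x0=[1.3, 0.7], method='Nelder-Mead',
>                    callback=lambda xk: history.append(np.copy(xk)))
>     print(res.x, len(history))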
>
>
>
> *From: *Andrew Nelson <andyfaff at gmail.com>
> *Sent: *Tuesday, February 13, 2018 9:21 PM
> *To: *scipy-dev <scipy-dev at python.org>
> *Subject: *[SciPy-Dev] WIP: Class based Optimizers
>
>
>
> PR #8414 on GitHub (https://github.com/scipy/scipy/pull/8414)
> introduces an `Optimize` class for `scipy.optimize`. The `Optimize`
> class can be advanced like an iterator, or run to convergence with the
> `solve` method.
>
> The PR is intended to provide a cleaner interface to the minimisation
> process, with the user being able to interact with the solver during
> iteration. This allows variation of solver hyper-parameters, or other
> user-definable interactions. As such, `callback`s become redundant,
> since all the required optimizer information can be accessed directly.
>
> Other historical parameters, such as `disp`, may become redundant in
> these objects. This is also a chance for a ground-zero approach to how
> people interact with optimizers.
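>
> As a very rough usage sketch (this requires the branch from the PR, and
> the import location, constructor arguments and attributes shown here
> are assumptions based on the description above, not necessarily what
> the PR actually implements):
>
>     import numpy as np
>     from scipy.optimize import rosen
>     from scipy.optimize import NelderMead   # assumed import location
>
>     solver = NelderMead(rosen, x0=np.array([1.3, 0.7]))  # assumed signature
>
>     # Advance step by step; solver state and hyper-parameters can be
>     # inspected or adjusted between iterations.
>     for _ in range(20):
>         next(solver)      # one iteration; may raise StopIteration at convergence
>         print(solver.x)   # assumed attribute holding the current best parameters
>
>     # ...or run the rest of the way to convergence with `solve`:
>     result = solver.solve()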
>
>
>
> The PR is intended as a Request For Comment to gain feedback on the
> architecture of the solvers. Given that this is a key piece of
> infrastructure it may generate a large amount of feedback; if I don't
> reply to all of it, bear in mind that I don't have unlimited time.
>
> In its current state I've implemented `Optimizer`, `Function`, `LBFGS`
> and `NelderMead` classes. `LBFGS` and `NelderMead` are called by the
> corresponding existing functions. The current test suite passes, but
> no unit tests have been written for the new functionality. I'd prefer
> to add a dedicated test suite once the overall architecture is
> settled.
>
>
>
> These optimisers were straightforward to translate. However, I don't
> know how this approach would translate to other minimisers, or to
> things like `least_squares`. If a design can be agreed on, then
> perhaps the best approach would be to tackle the remaining optimizers
> on a case-by-case basis once this PR is merged.
>
>
>