`optimize.minimize` offers a choice of many different methods for multivariate scalar minimisation. These methods are chosen using the `method` keyword.
There are also several global minimisation routines one can use (`differential_evolution`, `basinhopping`, `dual_annealing`, `shgo`). These minimisers have the same overall objective as `minimize`, just with a different approach to finding a minimum. The global minimiser routines are called individually and are not accessible through the `minimize` function as different methods. A PR is open at
https://github.com/scipy/scipy/pull/10778 which proposes to add a `differential-evolution` method to `minimize` that would permit this. This is a fairly straightforward change, as the call interfaces are almost identical and the problems are posed in similar ways.
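For concreteness, here is a minimal sketch of how the two entry points compare today, plus (commented out, and hypothetical until the PR is merged) what the proposed combined call might look like:

```python
from scipy.optimize import differential_evolution, minimize, rosen

bounds = [(-5, 5), (-5, 5)]

# Today, local and global minimisation are separate entry points:
res_local = minimize(rosen, [1.3, 0.7], method='Nelder-Mead')
res_global = differential_evolution(rosen, bounds)

# Under the open PR, something like this might also work
# (hypothetical until the PR is merged):
# res_global = minimize(rosen, [1.3, 0.7],
#                       method='differential-evolution', bounds=bounds)
```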
There are obviously pros and cons to this:
Pros
------
- One could call any of the multivariate scalar minimizers through one function.
- In user code this could simplify things significantly: code that offers all the different minimisers currently has to use if/elif constructs to call different functions depending on the chosen method.
I sort of think these pros are overstated. Dispatching to the right function does not seem that difficult to do (either in `minimize` or in user code), so the benefit of having that dispatch happen within `minimize` is small. Normalizing the APIs so that the options passed to the underlying methods are handled consistently is harder, and also more valuable. That is, for the dispatching to really be valuable, it has to unite the underlying functions and offer a translation layer to their calls.
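For what it's worth, here is a sketch of the kind of dispatch that user code carries today (the wrapper name `minimize_any` is made up for illustration; `basinhopping` is omitted because it takes `x0` rather than bounds):

```python
from scipy.optimize import (differential_evolution, dual_annealing,
                            minimize, shgo)

GLOBAL_SOLVERS = {
    'differential_evolution': differential_evolution,
    'dual_annealing': dual_annealing,
    'shgo': shgo,
}

def minimize_any(func, x0, bounds, method, **kwargs):
    """Dispatch to a local or global scipy.optimize solver by name."""
    # The dispatch itself is only a few lines; the genuinely hard part
    # is deciding which of **kwargs each underlying solver should see.
    if method in GLOBAL_SOLVERS:
        return GLOBAL_SOLVERS[method](func, bounds, **kwargs)
    return minimize(func, x0, method=method, bounds=bounds, **kwargs)
```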
The global solvers have many different optional arguments with little overlap in name or meaning: 'popsize', for example, is used only by 'differential_evolution'. The plan would have to be to silently ignore keyword arguments for concepts not used by the selected method, and I'm not sure that helps achieve clarity and simplicity. To use these methods, the user has to read the docs for the actual solver to set its many optional arguments anyway. At that point, they can just as easily change the name of the function they call.
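To make the keyword problem concrete, compare the signatures of two of the global solvers (these argument names are real, per the scipy docs):

```python
from scipy.optimize import differential_evolution, dual_annealing, rosen

bounds = [(-5, 5), (-5, 5)]

# 'popsize', 'mutation' and 'recombination' mean something only to
# differential_evolution ...
res_de = differential_evolution(rosen, bounds, popsize=20,
                                mutation=(0.5, 1.0), recombination=0.7)

# ... while dual_annealing has its own, unrelated knobs:
res_da = dual_annealing(rosen, bounds, initial_temp=5230.0,
                        restart_temp_ratio=2e-05)
```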
Cons
-------
- A user may not appreciate the differences in how local and global minimisers work, e.g. many of the global minimisers are stochastic, and some use local minimisers to polish the final solution.
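For instance, `differential_evolution` shows both behaviours (both parameters are real): the search is stochastic, so a `seed` is needed for a reproducible run, and `polish=True` (the default) runs the local L-BFGS-B minimiser on the best solution found.

```python
from scipy.optimize import differential_evolution, rosen

bounds = [(-5, 5), (-5, 5)]

# Stochastic search: fix `seed` for a reproducible run. With
# polish=True (the default), L-BFGS-B refines the best member of the
# final population.
res = differential_evolution(rosen, bounds, seed=1, polish=True)
```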
Could we have a discussion as to whether people think this is a good or bad idea? Would it confuse users, would it make `minimize` too convoluted, etc.?
I don't think the distinction between "local" and "global" is actually that important. In fact, I think the label "global" is somewhat misleading, as most of these methods require bounds. What they do is try to avoid getting stuck in the first minimum they find.
But I think there is another concern that may not have been expressed yet: `x0` is a required, positional argument for `minimize()`, giving an array of initial parameter values. Most of the global optimizers in scipy.optimize do not use `x0`; instead, they require bounds and explore the range of values between those bounds. Would `x0` be required AND ignored for these global optimizers?
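A minimal sketch of that mismatch:

```python
from scipy.optimize import differential_evolution, minimize, rosen

bounds = [(-5, 5), (-5, 5)]

# minimize() requires x0, an array of starting parameter values:
res = minimize(rosen, [1.3, 0.7], method='Nelder-Mead')

# differential_evolution() takes bounds instead, and draws its initial
# population from within them:
res = differential_evolution(rosen, bounds)

# So what would the combined call require? (hypothetical)
# res = minimize(rosen, x0=???, method='differential-evolution',
#                bounds=bounds)
```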
Cheers,
--Matt