Dear SciPy developers & users,
I have a couple of new derivative-free, global optimization algorithms I’ve been working on lately - plus some improvements to AMPGO and a few more benchmark functions - and I’d like to rerun the benchmarks as I did back in 2013 (!!!).
In doing so, I’d like to remove some of the least interesting/worst performing algorithms (Firefly, MLSL, Galileo, the original DE) and replace them with the ones currently available in SciPy - differential_evolution, SHGO and dual_annealing.
Everything seems fine and dandy, but it appears that SHGO does not accept an initial point for the optimization process, which makes the whole "run the optimization from 100 different starting points for each benchmark" exercise a bit moot.
I am no expert on SHGO, so maybe there is an alternative way to "simulate" changing the starting point of the optimization? Or maybe some other approach that keeps the comparison consistent across optimizers?
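One workaround I have been toying with (just a sketch, not sure it is statistically equivalent to varying x0): since SHGO's sampling over the box is deterministic for a fixed configuration, each "run" could apply a random translation to the objective, so SHGO sees the landscape from a different offset, and then map the result back. A minimal sketch on the 2-D Rosenbrock function, where the shift size and repeat count are arbitrary choices for illustration:

```python
import numpy as np
from scipy.optimize import shgo

def rosenbrock(x):
    # Global minimum at (1, 1) with value 0
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

rng = np.random.default_rng(0)
bounds = [(-5.0, 5.0), (-5.0, 5.0)]

results = []
for _ in range(3):  # would be 100 in the actual benchmark
    # Random translation of the domain; plays the role of a "starting point"
    shift = rng.uniform(-1.0, 1.0, size=2)
    shifted = lambda x, s=shift: rosenbrock(np.asarray(x) + s)
    res = shgo(shifted, bounds)
    # Map the minimizer back to the original coordinates
    results.append(res.x + shift)
```

Each entry of `results` should then land near the true minimizer regardless of the shift, while SHGO's internal sampling effectively starts "somewhere else" each time.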
Any suggestion is more than welcome.
Andrea.