
Dear Andrea,

SHGO does not use an initial starting point, only the bounds (which may also be specified as None or infinite). The benchmarks that I ran for the publication used the global minimum as a stopping criterion (together with performance profiles that demonstrate the final results). For this particular benchmarking framework I would propose simply using a single iteration (dim^2 + 1 points) or specifying 100 sampling points. A script that uses 100 sampling points in a single iteration with the Sobol sampling method:

```
from scipy.optimize import shgo  # obj_fun and bounds are supplied by the benchmark problem
result = shgo(obj_fun, bounds, n=100, sampling_method='sobol')
```

If you would like to add a more stochastic element to this performance, I think the best approach would be to use a different seed for the sampling method (in my experience this does not make much of a difference to performance on low-dimensional problems); otherwise, run shgo only once and/or with increasing numbers of iterations. Another possibility is to add a stochastic element to the bounds.
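For example, a sketch of that last idea (purely illustrative: the 5% shrink factor, the per-run seeds, and the obj_fun / base_bounds placeholders below are my own choices, not anything built into shgo) could look like this:

```
import numpy as np
from scipy.optimize import shgo

def obj_fun(x):
    # Placeholder objective (Rosenbrock); substitute the benchmark function under test.
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

base_bounds = [(-5.0, 5.0), (-5.0, 5.0)]  # placeholder bounds for the benchmark problem

results = []
for run in range(100):
    rng = np.random.default_rng(run)  # one seed per run, for reproducibility
    # Shrink each bound inward by a random fraction of its width (up to 5% here),
    # so every run samples a slightly different feasible box.
    bounds = [(lo + rng.uniform(0.0, 0.05) * (hi - lo),
               hi - rng.uniform(0.0, 0.05) * (hi - lo))
              for lo, hi in base_bounds]
    results.append(shgo(obj_fun, bounds, n=100, sampling_method='sobol'))

best = min(results, key=lambda r: r.fun)
print(best.x, best.fun)
```

Shrinking the box inward keeps every run inside the original feasible region while still giving shgo a different sampling domain each time.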
Please let me know if you need any help.

Best regards,
Stefan Endres

On Sun, Jul 26, 2020 at 4:06 PM Andrea Gavana <andrea.gavana@gmail.com> wrote:

Dear SciPy developers & users,
I have a couple of new derivative-free, global optimization algorithms I’ve been working on lately - plus some improvements to AMPGO and a few more benchmark functions - and I’d like to rerun the benchmarks as I did back in 2013 (!!!).
In doing so, I’d like to remove some of the least interesting/worst performing algorithms (Firefly, MLSL, Galileo, the original DE) and replace them with the ones currently available in SciPy - differential_evolution, SHGO and dual_annealing.
Everything seems good and dandy, but it appears to me that SHGO does not accept an initial point for the optimization process - which makes the whole “run the optimization from 100 different starting points for each benchmark” a bit moot.
I am no expert on SHGO, so maybe there is an alternative way to “simulate” the changing of the starting point for the optimization? Or maybe some other approach to make it consistent across optimizers?
Any suggestion is more than welcome.
Andrea.
--
Stefan Endres (MEng, AMIChemE, BEng (Hons) Chemical Engineering)
Research Associate, Leibniz Institute for Materials Engineering IWT, Badgasteiner Straße 3, 28359 Bremen, Germany
Work phone (DE): +49 (0) 421 218 51238
Cellphone (DE): +49 (0) 160 949 86417
Cellphone (ZA): +27 (0) 82 972 42 89
E-mail (work): s.endres@iwt.uni-bremen.de
Website: https://stefan-endres.github.io/