Global Optimization Benchmarks
Dear Andrea, that's a great resource you provide. From a quick skim I see you mention a few bugs in the SciPy codebase for the global benchmark problems. It would be great if you could provide more details (or even a PR?) so we can fix them. It would be even better if you could provide the 40 extra functions that you added on top of that (alternatively, I could go through the list and figure out which ones SciPy is missing). Note that the Jamil & Yang arXiv paper has quite a few mistakes in it as well: there are several functions where I actually found a better global minimum, or where it was in a different place to the one specified. I found this out by building a test setup for the suite. I also vaguely remember that there were some functions where the definition itself was wrong, and I had to go to other papers to fix the issue. On Fri, 8 Jan 2021 at 20:21, Andrea Gavana <andrea.gavana@gmail.com> wrote:
Dear SciPy Developers & Users,
long time no see :-) . I thought to start 2021 with a bit of a bang, to try and forget how bad 2020 has been... So I am happy to present you with a revamped version of the Global Optimization Benchmarks from my previous exercise in 2013.
This new set of benchmarks pretty much supersedes - and greatly expands - the previous analysis, which you can find at this location: http://infinity77.net/global_optimization/ .
The approach I have taken this time is to select as many benchmark test suites as possible: most of them are characterized by test function *generators*, from which we can create an almost unlimited number of unique test problems (a toy sketch of the generator idea follows the list below). The biggest news items are:
1. This whole exercise is made up of *6,825* test problems divided across *16* different test suites: most of these problems are of low dimensionality (2 to 6 variables), with a few benchmarks extending to 9+ variables. With all the sensitivities performed during this exercise on those benchmarks, the overall grand total number of function evaluations stands at *3,859,786,025* - close to *4 billion*. Not bad.

2. A couple of "new" optimization algorithms I have ported to Python:
- MCS: Multilevel Coordinate Search <https://www.mat.univie.ac.at/~neum/software/mcs/>: my translation to Python of the original Matlab code from A. Neumaier and W. Huyer (which also gives you GLS and MINQ for free). I have added a few minor improvements compared to the original implementation.
- BiteOpt: BITmask Evolution OPTimization <https://github.com/avaneev/biteopt>: I have converted the C++ code into Python and added a few minor modifications.
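To make the "generator" idea concrete, here is a minimal toy sketch in the spirit of the Max-Set of Gaussians generator described further down. None of the names or parameter choices come from the actual benchmark code - it only illustrates how one seeded recipe yields an endless family of problems with a known optimum:

    import numpy as np

    def make_gaussian_landscape(n_dim=2, n_peaks=10, seed=0):
        # Build a random "max of Gaussians" landscape; negated, so the
        # global minimum is exactly -1.0 at centers[0] by construction.
        rng = np.random.default_rng(seed)
        centers = rng.uniform(-5.0, 5.0, size=(n_peaks, n_dim))
        widths = rng.uniform(0.5, 2.0, size=n_peaks)
        heights = rng.uniform(0.1, 0.9, size=n_peaks)
        heights[0] = 1.0  # tallest peak -> known global optimum

        def objective(x):
            d2 = np.sum((centers - np.asarray(x)) ** 2, axis=1)
            return -np.max(heights * np.exp(-d2 / (2.0 * widths ** 2)))

        return objective, centers[0]

    # Every seed yields a distinct problem with a known optimum
    f, xglob = make_gaussian_landscape(seed=42)
    print(f(xglob))  # -1.0 by construction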
Enough chatting for now. The 13 tested algorithms are described here:
http://infinity77.net/go_2021/
High-level description & results of the 16 benchmarks:
http://infinity77.net/go_2021/thebenchmarks.html
Each benchmark test suite has its own dedicated page, with more detailed results and sensitivities.
List of tested algorithms:
1. *AMPGO*: Adaptive Memory Programming for Global Optimization: this is my Python implementation of the algorithm described here:
http://leeds-faculty.colorado.edu/glover/fred%20pubs/416%20-%20AMP%20(TS)%20...
I have added a few improvements here and there based on my Master's Thesis work on the standard Tunnelling Algorithm of Levy, Montalvo and Gomez. After AMPGO was integrated in lmfit <https://github.com/lmfit/lmfit-py>, I have improved it even further - in my opinion.

2. *BasinHopping*: Basin hopping is a stochastic algorithm which attempts to find the global minimum of a smooth scalar function of one or more variables. The algorithm was originally described by David Wales:
http://www-wales.ch.cam.ac.uk/
BasinHopping is now part of the standard SciPy distribution.

3. *BiteOpt*: BITmask Evolution OPTimization, based on the algorithm presented in this GitHub repository:
https://github.com/avaneev/biteopt
I have converted the C++ code into Python and added a few minor modifications.

4. *CMA-ES*: Covariance Matrix Adaptation Evolution Strategy, based on the following algorithm:
http://www.lri.fr/~hansen/cmaesintro.html
http://www.lri.fr/~hansen/cmaes_inmatlab.html#python (Python code for the algorithm)

5. *CRS2*: Controlled Random Search with Local Mutation, as implemented in the NLOpt <http://ab-initio.mit.edu/wiki/index.php/NLopt> package:
http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#Controlled_Random_S...

6. *DE*: Differential Evolution, described on the following page:
http://www1.icsi.berkeley.edu/~storn/code.html
DE is now part of the standard SciPy distribution, and I have taken the implementation as it stands in SciPy:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differen...

7. *DIRECT*: the DIviding RECTangles procedure, described in:
https://www.tol-project.org/export/2776/tolp/OfficialTolArchiveNetwork/NonLi...
http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#DIRECT_and_DIRECT-L (Python code for the algorithm)

8. *DualAnnealing*: the Dual Annealing algorithm, taken directly from the SciPy implementation:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.dual_ann...

9. *LeapFrog*: the Leap Frog procedure, which was recommended to me, taken from:
https://github.com/flythereddflagg/lpfgopt

10. *MCS*: Multilevel Coordinate Search: my translation to Python of the original Matlab code from A. Neumaier and W. Huyer (which also gives you GLS <https://www.mat.univie.ac.at/~neum/software/ls/> and MINQ <https://www.mat.univie.ac.at/~neum/software/minq/> for free):
https://www.mat.univie.ac.at/~neum/software/mcs/
I have added a few minor improvements compared to the original implementation. See the MCS <http://infinity77.net/go_2021/mcs.html#mcs> section for a quick and dirty comparison between the Matlab code and my Python conversion.

11. *PSWARM*: Particle Swarm optimization algorithm; it has been described in many online papers. I have used a compiled version of the C source code from:
http://www.norg.uminho.pt/aivaz/pswarm/

12. *SCE*: Shuffled Complex Evolution, described in:
Duan, Q., S. Sorooshian, and V. Gupta, "Effective and efficient global optimization for conceptual rainfall-runoff models", Water Resour. Res., 28, 1015-1031, 1992.
The version I used was graciously made available by Matthias Cuntz via a personal e-mail.

13. *SHGO*: Simplicial Homology Global Optimization, taken directly from the SciPy implementation:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.shgo.htm...
(A minimal usage sketch for the SciPy-based solvers in this list follows below.)
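For the SciPy-based solvers in the list above (BasinHopping, DE, DualAnnealing, SHGO), invocation is essentially uniform: an objective plus box bounds, with BasinHopping taking a starting point instead. A minimal sketch with a placeholder objective (not one of the benchmark problems):

    import numpy as np
    from scipy.optimize import (basinhopping, differential_evolution,
                                dual_annealing, shgo)

    def rastrigin(x):
        # Classic multimodal test function; global minimum 0 at the origin
        x = np.asarray(x)
        return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

    bounds = [(-5.12, 5.12)] * 2  # 2-dimensional box constraints

    # The bound-constrained solvers share the (func, bounds) calling convention
    for solver in (differential_evolution, dual_annealing, shgo):
        res = solver(rastrigin, bounds)
        print(solver.__name__, res.x, res.fun, res.nfev)

    # BasinHopping starts from a point instead of a box
    res = basinhopping(rastrigin, x0=np.full(2, 4.0), niter=50)
    print("basinhopping", res.x, res.fun)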
List of benchmark test suites:
1. SciPy Extended <http://infinity77.net/go_2021/scipy_extended.html#scipy-extended>: 235 multivariate problems (where the number of independent variables ranges from 2 to 17), again with multiple local/global minima. I have added about 40 new functions to the standard SciPy benchmarks <https://github.com/scipy/scipy/tree/master/benchmarks/benchmarks/go_benchmar...> and fixed a few bugs in the existing benchmark models in the SciPy repository.

2. GKLS <http://infinity77.net/go_2021/gkls.html#gkls>: 1,500 test functions, with dimensionality varying from 2 to 6, generated with the super famous GKLS Test Functions Generator <https://arxiv.org/abs/1103.2695>. I have taken the original C code (available at http://netlib.org/toms/) and converted it to Python.

3. GlobOpt <http://infinity77.net/go_2021/globopt.html#globopt>: 288 tough problems, with dimensionality varying from 2 to 5, created with another test function generator which I arbitrarily named "GlobOpt": https://www.researchgate.net/publication/225566516_A_new_class_of_test_funct... . The original code is in C++ and I have bridged it to Python using Cython. *Many thanks* go to Professor Marco Locatelli for providing an updated copy of the C++ source code.

4. MMTFG <http://infinity77.net/go_2021/mmtfg.html#mmtfg>: sort of an acronym for "Multi-Modal Test Function with multiple Global minima", this test suite implements the work of Jani Ronkkonen: https://www.researchgate.net/publication/220265526_A_Generator_for_Multimoda... . It contains 981 test problems with dimensionality varying from 2 to 4. The original code is in C and I have bridged it to Python using Cython.

5. GOTPY <http://infinity77.net/go_2021/gotpy.html#gotpy>: a generator of benchmark functions using the Bocharov-Feldbaum "Method-Min", containing 400 test problems with dimensionality varying from 2 to 5. I have taken the Python implementation from https://github.com/redb0/gotpy and improved its runtime. The original paper is at http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=at&paperid=11985&option_lang=eng .

6. Huygens <http://infinity77.net/go_2021/huygens.html#huygens>: this benchmark suite is very different from the rest, as it uses a "fractal" approach to generate test functions. It is based on the work of Cara MacNish on Fractal Functions <https://www.cs.bham.ac.uk/research/projects/ecb/displayDataset.php?id=150>. The original code is in Java; at the beginning I simply converted it to Python, but given it was slow as a turtle, I re-implemented it in Fortran and wrapped it using f2py <http://www.scipy.org/F2py>, then generated 600 2-dimensional test problems from it.

7. LGMVG <http://infinity77.net/go_2021/lgmvg.html#lgmvg>: not sure about the meaning of the acronym, but the implementation follows the "Max-Set of Gaussians Landscape Generator" described at http://boyuan.global-optimization.com/LGMVG/index.htm . Source code is given in Matlab, but it's fairly easy to convert to Python. This test suite contains 304 problems with dimensionality varying from 2 to 5.

8. NgLi <http://infinity77.net/go_2021/ngli.html#ngli>: stemming from the work of Chi-Kong Ng and Duan Li, this is a test problem generator for unconstrained optimization, but it's fairly easy to assign bound constraints to it. The methodology is described in https://www.sciencedirect.com/science/article/pii/S0305054814001774 , while the Matlab source code can be found at http://www1.se.cuhk.edu.hk/~ckng/generator/ . I have used the Matlab script to generate 240 problems with dimensionality varying from 2 to 5 by writing the generator parameters to text files, then used Python to create the objective functions based on those parameters and the benchmark methodology.

9. MPM2 <http://infinity77.net/go_2021/mpm2.html#mpm2>: implementing the "Multiple Peaks Model 2"; there is a Python implementation at https://github.com/jakobbossek/smoof/blob/master/inst/mpm2.py . This test problem generator is also used in the smoof <https://github.com/jakobbossek/smoof> library; I have taken the code almost as is and generated 480 benchmark functions with dimensionality varying from 2 to 5.

10. RandomFields <http://infinity77.net/go_2021/randomfields.html#randomfields>: as described in https://www.researchgate.net/publication/301940420_Global_optimization_test_... , it generates benchmark functions by "smoothing" one or more multidimensional discrete random fields and composing them. No source code is given, but the implementation is fairly straightforward from the article itself.

11. NIST <http://infinity77.net/go_2021/nist.html#nist>: not exactly the realm of Global Optimization solvers, but the NIST StRD <https://www.itl.nist.gov/div898/strd/nls/nls_main.shtml> dataset can be used to generate a single objective function as a "sum of squares". I have used the NIST dataset as implemented in lmfit <https://github.com/lmfit/lmfit-py/tree/master/NIST_STRD>, thus creating 27 test problems with dimensionality ranging from 2 to 9.

12. GlobalLib <http://infinity77.net/go_2021/globallib.html#globallib>: Arnold Neumaier maintains <https://www.mat.univie.ac.at/~neum/glopt/coconut/Benchmark/Benchmark.html> a suite of test problems termed the "COCONUT Benchmark", and Sahinidis has converted the GlobalLib and PrincetonLib AMPL/GAMS datasets into C/Fortran code (http://archimedes.cheme.cmu.edu/?q=dfocomp ). I have used a simple C parser to convert the benchmarks from C to Python. The global minima are taken from Sahinidis <http://thales.cheme.cmu.edu/dfo/comparison/comp.html> or from Neumaier, or refined using the NEOS server <https://neos-server.org/neos/> when the accuracy of the reported minima is too low. The suite contains 181 test functions with dimensionality varying between 2 and 9.

13. CVMG <http://infinity77.net/go_2021/cvmg.html#cvmg>: another "landscape generator"; I had to dig it out using the Wayback Machine at http://web.archive.org/web/20100612044104/https://www.cs.uwyo.edu/~wspears/m... . The acronym stands for "Continuous Valued Multimodality Generator". Source code is in C++, but it's fairly easy to port to Python. In addition to the original implementation (which uses the Sigmoid <https://en.wikipedia.org/wiki/Sigmoid_function> as a softmax/transformation function), I have added a few others to create varied landscapes. 360 test problems have been generated, with dimensionality ranging from 2 to 5.

14. NLSE <http://infinity77.net/go_2021/nlse.html#nlse>: again, not really the realm of Global Optimization solvers, but Nonlinear Systems of Equations can be transformed into single objective functions to optimize (see the sum-of-squares sketch after this list). I have drawn from many different sources (Publications <https://www.sciencedirect.com/science/article/pii/S0898122111004299>, ALIAS/COPRIN <http://www-sop.inria.fr/coprin/logiciels/ALIAS/Benches/node1.html#SECTION000...> and many others) to create 44 systems of nonlinear equations with dimensionality ranging from 2 to 8.

15. Schoen <http://infinity77.net/go_2021/schoen.html#schoen>: based on the early work of Fabio Schoen and his short note <https://link.springer.com/article/10.1007%2FBF01096734> on a simple but interesting idea for a test function generator; I have taken the C code in the note and converted it into Python, thus creating 285 benchmark functions with dimensionality ranging from 2 to 6. *Many thanks* go to Professor Fabio Schoen for providing an updated copy of the source code and for the email communications.

16. Robust <http://infinity77.net/go_2021/robust.html#robust>: the last benchmark test suite for this exercise; it is actually composed of 5 different kinds of analytical test function generators, producing deceptive, multimodal or flat functions depending on the settings. Matlab source code is available at http://www.alimirjalili.com/RO.html ; I simply converted it to Python and created 420 benchmark functions with dimensionality ranging from 2 to 6.
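As an aside on the NIST and NLSE suites above: the transformation to a single objective is simply the sum of squared residuals, which is zero exactly at a solution of the system. A minimal sketch with a made-up two-equation system (not one of the 44 NLSE problems), using DualAnnealing as an example solver:

    import numpy as np
    from scipy.optimize import dual_annealing

    def residuals(x):
        # Toy system: x0**2 + x1**2 = 4  and  exp(x0) + x1 = 1
        return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,
                         np.exp(x[0]) + x[1] - 1.0])

    def objective(x):
        # Zero exactly at a root of the system, positive everywhere else
        return np.sum(residuals(x) ** 2)

    res = dual_annealing(objective, bounds=[(-5.0, 5.0)] * 2, seed=1)
    print(res.x, res.fun)  # res.fun ~ 0 means a root was found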
Enjoy, and Happy 2021 :-) .
Andrea.
-- Dr. Andrew Nelson
Hi Andrew,
Yes, in fact for the SciPy Extended benchmarks I have taken the functions from the SciPy repository, with all the fixes and corrections you already made. That said, a few things I have found out are:

1. The global optimum for the Alpine02 function, with more significant digits, is:

self.fglob = -6.12950389

2. The global optimum for the Cola function, with more significant digits, is:

self.fglob = 11.74639029

3. The DevilliersGlasser02 objective function turned out to be impossible to solve because the *bounds* are wrong. It should be:

self._bounds = list(zip([0.5] * self.N, [60.0] * self.N))

instead of:

self._bounds = list(zip([1.0] * self.N, [60.0] * self.N))

4. The global optimum for the Hansen function, with more significant digits, is:

self.fglob = -176.54179313

A quick way to sanity-check values like these is sketched after this message.

I can't easily make a PR as I don't have much time to set up a proper environment on my computer (nor the privileges to do so...). However, I can easily find out which new functions I have added and potentially send the code to you, if you wish.

Andrea.
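For reference, one way to sanity-check reported optima like the ones above is to evaluate each benchmark at its stated global optimum and compare against fglob. This is only a sketch: the import path is hypothetical and depends on where the go_benchmark_functions package lives in your checkout, and the tolerance is illustrative.

    import numpy as np
    from go_benchmark_functions import Alpine02  # hypothetical import path

    problem = Alpine02()
    for xglob in problem.global_optimum:  # a problem may list several minima
        f = problem.fun(np.asarray(xglob))
        # 1e-6 is an illustrative tolerance; with the refined value
        # fglob = -6.12950389 the agreement should be tight
        assert abs(f - problem.fglob) < 1e-6, (f, problem.fglob)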
On Fri, Jan 8, 2021 at 10:21 AM Andrea Gavana <andrea.gavana@gmail.com> wrote:
Dear SciPy Developers & Users,
long time no see :-) . I thought to start 2021 with a bit of a bang, to try and forget how bad 2020 has been... So I am happy to present you with a revamped version of the Global Optimization Benchmarks from my previous exercise in 2013.
Hi Andrea, this is awesome! Thanks for sharing!

This could be really useful to link to and use as a guide for providing recommendations for solvers to use in the scipy.optimize tutorials. It's good to see that SciPy overall is much more competitive than it was in 2013.

Overall it seems SHGO is our most accurate solver, and making it faster seems worthwhile. That shouldn't be very difficult, given that it's all pure Python still. MCS isn't open source, but both DIRECT and BiteOpt are MIT-licensed and seem the best candidates to be considered for inclusion in SciPy.

If you have recommendations or takeaways from all this work for SciPy, I'd love to hear them.

Cheers, Ralf
Hi Ralf, On Fri, 8 Jan 2021 at 11:07, Ralf Gommers <ralf.gommers@gmail.com> wrote:
Hi Andrea, this is awesome! Thanks for sharing!
I am happy you like it :-) .
This could be really useful to link to and use as a guide for providing recommendations for solvers to use in the scipy.optimize tutorials. It's good to see that SciPy overall is much more competitive than it was in 2013. Overall it seems SHGO is our most accurate solver, and making it faster seems worthwhile. That shouldn't be very difficult, given that it's all pure Python still.
I have to say that, compared to back in 2013, the addition of SHGO and DualAnnealing to SciPy has made the global optimization world in SciPy much more powerful, pretty much at the top of what can currently be done with open source solvers.
MCS isn't open source, but both DIRECT and BiteOpt are MIT-licensed and seem the best candidates to be considered for inclusion in SciPy.
I couldn't find a license restriction for MCS, but maybe I haven't looked hard enough... Do you have a link for it? I am just curious.
If you have recommendations or takeaways from all this work for SciPy, I'd love to hear them.
I have a few thoughts, but please bear in mind that this is my opinion, and only based on this exercise plus a couple of real-life problems I have been working on recently:

1. Assuming we are dealing with a low-dimensional problem, SHGO is close to unbeatable. I have found a glitch, though: when the number of variables gets to 10+, or for repeated continuous optimizations, SHGO seems to require enormous amounts of memory. In the SciPy Extended benchmark, trying all the 100 restarts drove my RAM consumption to 190 GB for SHGO alone - not something you want to try unless you have a monster machine like mine.

2. DualAnnealing is also extremely powerful, ranking consistently close to the top for most of the benchmarks. It probably requires some tuning of the parameters (which I haven't done), especially when the allowable number of function evaluations is large (a tuning sketch for points 1 and 2 follows this message). That said, you can clearly see DualAnnealing shining in the SciPy Extended, GKLS, LGMVG and RandomFields benchmarks.

3. BasinHopping and DifferentialEvolution are generally slightly weaker, at least on these benchmarks. That said, I have used both of them with great success on real-life problems - albeit with generally generous budgets of function evaluations.

4. Real-life-wise, I recently had three very tough problems to work on: one 9-dimensional objective function describing multi-phase decline curves for oil/gas wells, which I am now satisfactorily fitting with DualAnnealing; another on the optimization of 3D well trajectories, which I am happily handing over to SHGO or MCS depending on the problem; and another related to wind and renewable data fitting, which DifferentialEvolution is handling quite well.

So, all in all, benchmarks only give you so much information: real-life problems sometimes defy the accepted wisdom that builds up around contrived (but synthetic) objective functions. That said, if I had to attack a new problem and had no idea where to start, I would definitely give SHGO and DualAnnealing the first go, as they are quite powerful across a large spectrum of problems.

In the end, I believe that SciPy will definitely benefit from the addition of a couple (few?) more robust global solvers, especially ones that implement techniques completely different from the existing ones (such as DIRECT, MCS and BiteOpt, of course). Giving more options to users is always going to make people happy - but of course you have to balance that against the maintenance effort in the library.

Andrea.
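To make points 1 and 2 above concrete, this is the kind of knob-turning meant there. The parameters are real scipy.optimize arguments, but the values are illustrative rather than tuned, and the objective is a stand-in:

    import numpy as np
    from scipy.optimize import dual_annealing, shgo

    def styblinski_tang(x):
        # Global minimum of about -39.166 * len(x) near x_i = -2.9035
        x = np.asarray(x)
        return 0.5 * np.sum(x ** 4 - 16.0 * x ** 2 + 5.0 * x)

    bounds = [(-5.0, 5.0)] * 4

    # DualAnnealing: raise the initial temperature and the evaluation budget
    # when the allowable number of function evaluations is large
    res = dual_annealing(styblinski_tang, bounds,
                         maxiter=2000, initial_temp=25000.0, maxfun=200000)
    print(res.x, res.fun)

    # SHGO: capping the number of sampling points and iterations keeps the
    # memory footprint bounded, at the cost of search thoroughness
    res = shgo(styblinski_tang, bounds, n=128, iters=2,
               sampling_method='sobol')
    print(res.x, res.fun)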
On Fri, Jan 8, 2021 at 11:35 AM Andrea Gavana <andrea.gavana@gmail.com> wrote:
I couldn't find a license restriction for MCS, but maybe I haven't looked hard enough... Do you have a link for it? I am just curious.
MCS itself doesn't contain any license information, but it depends on MINQ which has a link in "All versions of MINQ are licensed" on this page: https://www.mat.univie.ac.at/~neum/software/minq/. It's only free for non-commercial use.
I have a few thoughts, but please bear in mind that this is my opinion, and only based on this exercise plus a couple of real-life problems I have been working on recently:
Thanks!
1. Assuming we are dealing with a low-dimensional problem, SHGO is close to unbeatable. I have found a glitch, though: when the number of variables gets to 10+, or for repeated continuous optimizations, SHGO seems to require enormous amounts of memory. In the SciPy Extended benchmark, trying all the 100 restarts drove my RAM consumption to 190 GB for SHGO alone - not something you want to try unless you have a monster machine like mine.
That seems like something we should improve. Cheers, Ralf
Hi Ralf, On Fri, 8 Jan 2021 at 12:15, Ralf Gommers <ralf.gommers@gmail.com> wrote:
MCS itself doesn't contain any license information, but it depends on MINQ which has a link in "All versions of MINQ are licensed" on this page: https://www.mat.univie.ac.at/~neum/software/minq/. It's only free for non-commercial use.
Ah, OK, thank you - I didn't think about that. Of course, if SciPy had another, different "bound constrained indefinite quadratic programming" module, then we could easily swap it in :-) . Andrea.
On Fri, Jan 8, 2021 at 12:20 PM Andrea Gavana <andrea.gavana@gmail.com> wrote:
Ah, OK, thank you, I didn't think about that. Of course, assuming SciPy had another, different "bound constrained indefinite quadratic programming" module then we could easily swap it :-) .
MCS and MINQ are from the same author, so I'd expect the same restriction to apply to MCS as well. We could ask for permission to license all of that under BSD/MIT; sometimes that works - the author seems like the typical academic who doesn't understand open source licensing. In the past we've had success with explaining; given how much extra exposure (and how many users) MCS would get if it were included in SciPy, it may be worth doing if someone is motivated to work on integrating MCS into SciPy. Cheers, Ralf
Awesome work, Andrea! Would it be possible for you to make your implementations of mcs and biteopt publicly available? And more out of curiosity, not directly related to scipy: since you work in an industry setting, did you compare these open source optimizers against commercial ones like knitro?
Cheers, Daniel
Hi Daniel, On Sat, 9 Jan 2021 at 08.37, Daniel Schmitz <danielschmitzsiegen@googlemail.com> wrote:
Awesome work, Andrea!
Thank you, I’m happy you liked it :-) . I have been told to try and condense the work into a paper and submit it to the Journal of Global Optimization, although I’m currently doubting that such a prestigious publication will accept a paper from a nobody like me.
Would it be possible for you to make your implementations of mcs and biteopt publicly available?
For BiteOpt I might be able to - after a bit of polishing, nothing major. For both I’ll have to ask for permission from my employer, as it was a little tour de force getting MCS right.
And more out of curiosity, not directly related to scipy: since you work in an industry setting, did you compare these open source optimizers against commercial ones like knitro?
Unfortunately we don’t have any commercial solver available - as far as I know - but I would love to redo the work using other solvers (open source or not). The very interesting exercise from Sahinidis et al. (https://link.springer.com/content/pdf/10.1007/s10898-012-9951-y.pdf), although limited to a single set of benchmarks, shows that commercial solvers like Tomlab are very efficient, closely followed by MCS. It would be nice to try and apply the solvers in KNITRO, Tomlab, GlobSol (others?) to all 16 benchmarks to see where open source stands, but I have no idea how it could be done as I have no licenses for those. Andrea.
That said, there is a publicly available Python wrapper for BiteOpt here: https://github.com/leonidk/biteopt . I haven’t used it myself; I decided to rewrite the algorithm in Python because I wanted to understand what the code was doing, and I wanted to add a couple of (minor) modifications to try and make the solver more efficient. Andrea.
Thanks Andrea, will give biteopt a try. At first glance, despite the "sparse" documentation, it should be possible to get it working after checking the examples. Regarding a publication, I would not be so pessimistic: the sheer number of benchmarks you carried out is very interesting for the optimization community! The paper you attached is very interesting, too - Tomlab seems to be superior for hard problems in general. Best, Daniel
Hi Andrea,
thank you for sharing this. I was going to point out the "No Free Lunch Theorem" but you mention it yourself on your website, good. I have a few questions/comments:

- The exclusion of gradient-based solvers is unfortunate. It is of course up to you what you investigate, but gradient-based solvers are surely useful in practice. Not all real-life problems involve a (non-analytical) simulation.

- "This effort stems from the fact that I got fed up with the current attitude of most mathematicians/numerical optimization experts, who tend to demonstrate the advantages of an algorithm based on “elapsed time” or “CPU time” or similar meaningless performance indicators." I don't know where you get that from, what are your sources? I have never seen an academic paper that used elapsed time or CPU time. The scientific papers I have read use the number of function evaluations to compare performance, which is a meaningful, machine-independent performance measure if the total time is dominated by the time spent in the function, as it usually is.

- I feel uneasy about your performance measure. It is whether the solver finds the minimum value of the function (why not the location of the minimum? Isn't that usually of interest?) within some fixed tolerance in 2000 function evaluations.
  a) The maximum number of function evaluations that you use does not depend on the dimensionality of the problem, but it clearly should. The search space is larger in higher dimensional problems, so more evaluations are needed by any algorithm. That is also obvious from your results.
  b) Instead of recording a binary outcome (success/failure to find the minimum in N evaluations), I think it would be more useful to record the number of evaluations until the minimum is reached and then give the mean or median number of function evaluations over many trials as well as the percentage of successful convergences. Algorithms can be ranked by robustness (the number of correctly solved problems) and convergence rate (the average/median number of function evaluations). Robustness and convergence rate may be anti-correlated. You are mixing the two in your performance measure, which makes it more difficult to interpret.

- http://infinity77.net/global_optimization/ does not list the same algorithms as the table in http://infinity77.net/go_2021/thebenchmarks.html#info-general-results

- While I think we agree that CPU time is not a useful means to compare algorithms, it is then quite surprising to see Fig 0.6 with the CPU times. I suppose the pure-Python implementations perform so badly because the benchmark functions are all rather small Python functions which are quick to evaluate, so that the time spent inside the solver does matter.

- I see an inconsistency between your declared goals and your benchmark. You say you only care about optimization of non-analytical functions in which the function computation involves a simulation, but then most of your benchmark functions are analytical functions. In other words, you do not measure the performance on non-analytical functions. It is quite possible that the ranking would look different if you used non-analytical functions in the benchmarks.

- I would prefer to read technical documents written in a more professional, detached writing style, and I think others would, too.

Regards, Hans
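For what it's worth, the robustness/convergence-rate split Hans suggests in point (b) is cheap to compute once the per-run evaluation counts are recorded. A minimal sketch of that bookkeeping, with made-up numbers purely for illustration:

    import numpy as np

    # Per-solver arrays of "evaluations until the global minimum was first
    # reached"; np.nan marks a failed run. All numbers here are invented.
    results = {
        "SolverA": np.array([120, 340, np.nan, 95, 600, np.nan]),
        "SolverB": np.array([80, np.nan, np.nan, 150, np.nan, 210]),
    }

    for name, evals in results.items():
        solved = ~np.isnan(evals)
        robustness = solved.mean()         # fraction of problems solved
        convergence = np.nanmedian(evals)  # median evaluations, successes only
        print(f"{name}: solved {robustness:.0%}, "
              f"median evals to success {convergence:.0f}")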
Hi Hans,

On Mon, 11 Jan 2021 at 11:47, Hans Dembinski <hans.dembinski@gmail.com> wrote:
> thank you for sharing this. I was going to point out the "No Free Lunch Theorem" but you mention it yourself on your website, good.

Thank you for your comments. Before trying to answer your concerns, let me clarify a few things, as it seems to me we are mixing two sets of results in the comments:
1. This page: http://infinity77.net/global_optimization/ represents my old work on global optimization (dated 2013), and I consider it now obsolete and superseded by the work at point (2).
2. This page: http://infinity77.net/go_2021/index.html (and subsequent pages) is now my latest reference. If you read the description (and in particular the section about "The Rules"), you will see that they are not quite the same as they were in point (1).

> - The exclusion of gradient-based solvers is unfortunate. [...]

I welcome suggestions on which *global* optimization algorithms are gradient-based and can be applied relatively easily using Python, NumPy and SciPy. Note that there is no mention of gradient-based or non-gradient-based algorithms in the page http://infinity77.net/go_2021/index.html.

> - "This effort stems from the fact that I got fed up with the current attitude of most mathematicians/numerical optimization experts [...]" I don't know where you get that from, what are your sources?

While I agree that in recent years the shift to using function evaluations instead of CPU time has greatly prevailed, that was not the case a while back. A few examples:
https://www.mat.univie.ac.at/~neum/ms/comparison.pdf
https://arxiv.org/pdf/1709.08242.pdf
But since that sentence seems to be annoying, I'll just remove it :-) .

> - I feel uneasy about your performance measure. It is whether the solver finds the minimum value of the function (why not the location of the minimum? Isn't that usually of interest?) within some fixed tolerance in 2000 function evaluations.

I have both the function value and the location of the minimum. Of course they are both important, and of course I will use the location of the minimum to do further analysis. The point of the exercise is: I know where the global optimum (optima) lies; can the solver find it within a specific tolerance? Most benchmarks are designed like this.

The 2,000 is not a hard limit anymore per se: as you will notice, I have run all 16 benchmarks with different stopping conditions in terms of maximum number of function evaluations. Specifically, all the benchmarks have been run 22 times, each run limiting the function evaluations budget to 100, then 200, 300, 400, ..., 2000, 5000, 10000. Some of the benchmarks have been extended to 50,000 (http://infinity77.net/go_2021/globopt.html).

> a) The maximum number of function evaluations that you use does not depend on the dimensionality of the problem, but it clearly should. [...]

I agree, and this is why I have run all the benchmarks multiple times with varying budgets of function evaluations, and also why many benchmarks have a "Dimensionality Effects" chapter to look at what happens when the problem grows in size (http://infinity77.net/go_2021/gkls.html#size-dimensionality-effects, http://infinity77.net/go_2021/mmtfg.html#size-dimensionality-effects, many others).

That said, most of the problems I usually deal with are computationally expensive - so if one day my model has two parameters to tune and I allow the algorithm 500 function evaluations, that may take me one week to solve. If the month after it is decided that the model is better described by a function of 10 parameters, I am not going to ask for 2,500 simulations - it's going to take me a month and a half to get the results back. The solver will have to deal with a 10-parameter model in 500-600 evaluations anyway.

> b) Instead of recording a binary outcome (success/failure to find the minimum in N evaluations), I think it would be more useful to record the number of evaluations until the minimum is reached [...]

This is exactly what has been done for the SciPy Extended benchmark: http://infinity77.net/go_2021/scipy_extended.html#test-functions-general-solvers-performances tells you that this specific benchmark has been run considering 100 random starting points for every benchmark function, and the reported statistics (overall success, number of function evaluations) refer to successful optimizations only. I haven't repeated the 100 random starting points for the other 15 benchmarks because it would take me forever to run them like that.

> - http://infinity77.net/global_optimization/ does not list the same algorithms as the table in http://infinity77.net/go_2021/thebenchmarks.html#info-general-results

Please see above: http://infinity77.net/global_optimization is old and should not be looked at anymore.

> - While I think we agree that CPU time is not a useful means to compare algorithms, it is then quite surprising to see Fig 0.6 with the CPU times. [...]

That's generally correct. It was just a way for me to say that some of the solvers are inherently slower than others, although for a real-life problem it shouldn't matter so much.

> - I see an inconsistency between your declared goals and your benchmark. [...] It is quite possible that the ranking would look different if you used non-analytical functions in the benchmarks.

The problems in the benchmarks are analytical (or almost analytical - not all benchmarks are like that; did you take a look at http://infinity77.net/go_2021/huygens.html ?) because the benchmarks are designed to be that way. In global optimization, a benchmark is designed to try and reproduce features that a real-life objective function may exhibit. In the event that I ever manage to get a paper published on this exercise, I will definitely include real-life problems in the analysis, as this is always my aim. The usefulness of this set of benchmarks comes from the fact that, after all this monster analysis with 16 different test suites, I can approach a new, real-life study by saying: "look, I don't know which solvers will be the best, but I will definitely start with MCS, SHGO or Dual Annealing".

> - I would prefer to read technical documents written in a more professional, detached writing style, and I think others would, too.

That will probably follow if and when (if ever) a paper may be published about this. That said, I welcome, any time, corrections or modifications to the writings - I understand that some of the paragraphs I have written can be seen as less professional than needed, so I will be happy to change them.

Andrea.
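The budget sweep Andrea describes (100, 200, ..., 2000, 5000, 10000 evaluations per run) boils down to wrapping each objective in an evaluation counter that enforces a hard cutoff. A minimal sketch of one common pattern; the solver, test function and budgets below are placeholders, and the stop-by-exception trick assumes the solver lets exceptions propagate:

    import numpy as np
    from scipy.optimize import dual_annealing

    class Budget(Exception):
        """Raised when the evaluation budget is exhausted."""

    class CountingObjective:
        # Wraps an objective, counts calls, tracks the best point seen and
        # aborts the run once the budget is spent.
        def __init__(self, func, max_evals):
            self.func, self.max_evals = func, max_evals
            self.n_evals, self.best_f, self.best_x = 0, np.inf, None

        def __call__(self, x):
            if self.n_evals >= self.max_evals:
                raise Budget
            self.n_evals += 1
            f = self.func(x)
            if f < self.best_f:
                self.best_f, self.best_x = f, np.array(x)
            return f

    def sphere(x):
        return float(np.sum(np.asarray(x) ** 2))

    for budget in (100, 200, 500):
        wrapped = CountingObjective(sphere, budget)
        try:
            dual_annealing(wrapped, [(-5, 5)] * 2, seed=1)
        except Budget:
            pass  # budget exhausted; best point so far is recorded in wrapped
        print(budget, wrapped.n_evals, wrapped.best_f)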
Dear Andrea,

Thank you very much for this detailed analysis. I don't think I've seen such a large collection of benchmark test suites or collection of DFO algorithms since the publication by Rios and Sahinidis in 2013. Some questions:

- Many of the commercial algorithms offer free licenses for benchmarking problems of less than 10 dimensions. Would you be willing to include some of these in your benchmarks at some point? It would be a great reference to use.
- The collection of test suites you've garnered could be immensely useful for further algorithm development. Is there a possibility of releasing the code publicly (presumably after you've published the results in a journal)? In this case I would also like to volunteer to run some of the commercial solvers on the benchmark suite.
- It would also help to have a central repository for fixing bugs and adding lower global minima when they are found (of which there are quite a few).

Comments on shgo:

- High RAM use in higher dimensions: in the higher dimensions the new simplicial sampling can be used (not pushed to scipy yet; I still need to update some documentation before the PR). This alleviates, but does not eliminate, the memory leak issue. As you've said, SHGO is best suited to problems below 10 dimensions, as anything higher leaves the realm of DFO problems and starts to enter the domain of NLP problems. My personal preference in this case is to use the stochastic algorithms (basinhopping and differential evolution) on problems where it is known that a gradient based solver won't work.
- An exception to this "rule" is when special grey box information, such as symmetry of the objective function (something that can be supplied to shgo to push the applicability of the algorithm up to ~100 variables) or pre-computed bounds on the Lipschitz constants, is known.
- In the symmetry case SHGO can solve these by supplying the `symmetry` option (which was used in the previous benchmarks done by me for the JOGO publication; although I did not specifically check if performance was actually improved on those problems, shgo did converge on all benchmark problems in the scipy test suite).
- I have had a few reports of memory leaks from various users. I have spoken to a few collaborators about the possibility of finding a Masters student to cythonize some of the code or otherwise improve it. Hopefully this will happen in the summer semester of 2021.

Thank you again for compiling this large set of benchmark results.

Best regards, Stefan
--
Stefan Endres (MEng, AMIChemE, BEng (Hons) Chemical Engineering)
Wissenschaftlicher Mitarbeiter: Leibniz Institute for Materials Engineering IWT, Badgasteiner Straße 3, 28359 Bremen, Germany
Work phone (DE): +49 (0) 421 218 51238 | Cellphone (DE): +49 (0) 160 949 86417 | Cellphone (ZA): +27 (0) 82 972 42 89
E-mail (work): s.endres@iwt.uni-bremen.de | Website: https://stefan-endres.github.io/
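The symmetry grey-box information Stefan mentions is passed to shgo through its options dict. A minimal sketch, assuming a SciPy version that exposes the `symmetry` option documented for shgo (the fully symmetric test function and bounds are arbitrary choices):

    import numpy as np
    from scipy.optimize import shgo

    # A fully symmetric objective: swapping any two variables leaves f unchanged.
    def styblinski_tang(x):
        x = np.asarray(x)
        return 0.5 * np.sum(x**4 - 16.0 * x**2 + 5.0 * x)

    bounds = [(-5.0, 5.0)] * 4

    # Plain run versus a run declaring the symmetry (option name as in the
    # SciPy docs; availability and behaviour may vary across SciPy versions).
    res_plain = shgo(styblinski_tang, bounds)
    res_sym = shgo(styblinski_tang, bounds, options={"symmetry": True})

    print(res_plain.fun, res_sym.fun)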
Hi Stefan,

You’re most welcome :-) . I’m happy the experts in the community are commenting and suggesting things, and constructive criticism is also always welcome.

On Sun, 17 Jan 2021 at 12.11, Stefan Endres <stefan.c.endres@gmail.com> wrote:
> - Many of the commercial algorithms offer free licenses for benchmarking problems of less than 10 dimensions. Would you be willing to include some of these in your benchmarks at some point? [...]

I’m definitely willing to include those commercial algorithms. The test suite per se is almost completely automated, so it’s not that complicated to add one or more solvers. I’m generally more inclined to test open source algorithms, but there’s nothing stopping the inclusion of commercial ones. I welcome any suggestions related to commercial solvers, as long as they can run on Python 2 / Python 3 and on Windows (I might be able to set up a Linux virtual machine if absolutely needed, but that would defy part of the purpose of the exercise - SHGO, Dual Annealing and the other SciPy solvers run on all platforms that support SciPy).

> - The collection of test suites you've garnered could be immensely useful for further algorithm development. Is there a possibility of releasing the code publicly (presumably after you've published the results in a journal)? [...]

I’m still sorting out all the implications related to a potential paper with my employer, but as far as I can see there shouldn’t be any problem with that: assuming everything goes as it should, I will definitely push for making the code open source.

> Comments on shgo: [...]

To be honest I wouldn’t be so concerned in general: SHGO is an excellent global optimization algorithm and it consistently ranks at the top, no matter what problems you throw at it. Together with Dual Annealing, SciPy has gained two phenomenal nonlinear solvers and I’m very happy to see that SciPy is now at the cutting edge of the open source global optimization universe.

Andrea.
Huygens <http://infinity77.net/go_2021/huygens.html#huygens>: this benchmark suite is very different from the rest, as it uses a “fractal” approach to generate test functions. It is based on the work of Cara MacNish on Fractal Functions <https://www.cs.bham.ac.uk/research/projects/ecb/displayDataset.php?id=150>. The original code is in Java, and at the beginning I just converted it to Python: given it was slow as a turtle, I have re-implemented it in Fortran and wrapped it using f2py <http://www.scipy.org/F2py>, then generating 600 2-dimensional test problems out of it. 7.
LGMVG <http://infinity77.net/go_2021/lgmvg.html#lgmvg>: not sure about the meaning of the acronym, but the implementation follows the “Max-Set of Gaussians Landscape Generator” described in http://boyuan.global-optimization.com/LGMVG/index.htm . Source code is given in Matlab, but it’s fairly easy to convert it to Python. This test suite contains 304 problems with dimensionality varying from 2 to 5. 8.
NgLi <http://infinity77.net/go_2021/ngli.html#ngli>: Stemming from the work of Chi-Kong Ng and Duan Li, this is a test problem generator for unconstrained optimization, but it’s fairly easy to assign bound constraints to it. The methodology is described in https://www.sciencedirect.com/science/article/pii/S0305054814001774 , while the Matlab source code can be found in http://www1.se.cuhk.edu.hk/~ckng/generator/ . I have used the Matlab script to generate 240 problems with dimensionality varying from 2 to 5 by outputting the generator parameters in text files, then used Python to create the objective functions based on those parameters and the benchmark methodology. 9.
MPM2 <http://infinity77.net/go_2021/mpm2.html#mpm2>: Implementing the “Multiple Peaks Model 2”, there is a Python implementation at https://github.com/jakobbossek/smoof/blob/master/inst/mpm2.py . This is a test problem generator also used in the smoof <https://github.com/jakobbossek/smoof> library, I have taken the code almost as is and generated 480 benchmark functions with dimensionality varying from 2 to 5. 10.
RandomFields <http://infinity77.net/go_2021/randomfields.html#randomfields>: as described in https://www.researchgate.net/publication/301940420_Global_optimization_test_... , it generates benchmark functions by “smoothing” one or more multidimensional discrete random fields and composing them. No source code is given, but the implementation is fairly straightforward from the article itself. 11.
NIST <http://infinity77.net/go_2021/nist.html#nist>: not exactly the realm of Global Optimization solvers, but the NIST StRD <https://www.itl.nist.gov/div898/strd/nls/nls_main.shtml> dataset can be used to generate a single objective function as “sum of squares”. I have used the NIST dataset as implemented in lmfit <https://github.com/lmfit/lmfit-py/tree/master/NIST_STRD>, thus creating 27 test problems with dimensionality ranging from 2 to 9. 12.
GlobalLib <http://infinity77.net/go_2021/globallib.html#globallib>: Arnold Neumaier maintains <https://www.mat.univie.ac.at/~neum/glopt/coconut/Benchmark/Benchmark.html> a suite of test problems termed “COCONUT Benchmark” and Sahinidis has converted the GlobalLib and PricentonLib AMPL/GAMS dataset into C/Fortran code (http://archimedes.cheme.cmu.edu/?q=dfocomp ). I have used a simple C parser to convert the benchmarks from C to Python.
The global minima are taken from Sahinidis <http://thales.cheme.cmu.edu/dfo/comparison/comp.html> or from Neumaier or refined using the NEOS server <https://neos-server.org/neos/> when the accuracy of the reported minima is too low. The suite contains 181 test functions with dimensionality varying between 2 and 9. 13.
CVMG <http://infinity77.net/go_2021/cvmg.html#cvmg>: another “landscape generator”, I had to dig it out using the Wayback Machine at http://web.archive.org/web/20100612044104/https://www.cs.uwyo.edu/~wspears/m... , the acronym stands for “Continuous Valued Multimodality Generator”. Source code is in C++ but it’s fairly easy to port it to Python. In addition to the original implementation (that uses the Sigmoid <https://en.wikipedia.org/wiki/Sigmoid_function> as a softmax/transformation function) I have added a few others to create varied landscapes. 360 test problems have been generated, with dimensionality ranging from 2 to 5. 14.
NLSE <http://infinity77.net/go_2021/nlse.html#nlse>: again, not really the realm of Global optimization solvers, but Nonlinear Systems of Equations can be transformed to single objective functions to optimize. I have drawn from many different sources (Publications <https://www.sciencedirect.com/science/article/pii/S0898122111004299> , ALIAS/COPRIN <http://www-sop.inria.fr/coprin/logiciels/ALIAS/Benches/node1.html#SECTION000...> and many others) to create 44 systems of nonlinear equations with dimensionality ranging from 2 to 8. 15.
Schoen <http://infinity77.net/go_2021/schoen.html#schoen>: based on the early work of Fabio Schoen and his short note <https://link.springer.com/article/10.1007%2FBF01096734> on a simple but interesting idea on a test function generator, I have taken the C code in the note and converted it into Python, thus creating 285 benchmark functions with dimensionality ranging from 2 to 6.
*Many thanks* go to Professor Fabio Schoen for providing an updated copy of the source code and for the email communications. 16.
Robust <http://infinity77.net/go_2021/robust.html#robust>: the last benchmark test suite for this exercise, it is actually composed of 5 different kind-of analytical test function generators, containing deceptive, multimodal, flat functions depending on the settings. Matlab source code is available at http://www.alimirjalili.com/RO.html , I simply converted it to Python and created 420 benchmark functions with dimensionality ranging from 2 to 6.
Enjoy, and Happy 2021 :-) .
Andrea.
_______________________________________________
SciPy-Dev mailing list SciPy-Dev@python.org https://mail.python.org/mailman/listinfo/scipy-dev
-- Stefan Endres (MEng, AMIChemE, BEng (Hons) Chemical Engineering)
Wissenchaftlicher Mitarbeiter: Leibniz Institute for Materials Engineering IWT, Badgasteiner Straße 3, 28359 Bremen, Germany <https://www.google.com/maps/search/Badgasteiner+Stra%C3%9Fe+3,+28359+Bremen,+Germany?entry=gmail&source=g> Work phone (DE): +49 (0) 421 218 51238 Cellphone (DE): +49 (0) 160 949 86417 Cellphone (ZA): +27 (0) 82 972 42 89 E-mail (work): s.endres@iwt.uni-bremen.de Website: https://stefan-endres.github.io/ _______________________________________________ SciPy-Dev mailing list SciPy-Dev@python.org https://mail.python.org/mailman/listinfo/scipy-dev
Hey all,
after Andrea's suggestions, I did an extensive GitHub search and found several global optimization Python libraries which mimic the SciPy API, so that a user only has to change the import statements. Could it be useful to add a page in the documentation listing these?
A non-exhaustive list:
DIRECT: https://github.com/andim/scipydirect
DE/PSO/CMA-ES: https://github.com/keurfonluu/stochopy
PSO: https://github.com/jerrytheo/psopy
Powell's derivative-free optimizers: https://www.pdfo.net/index.html
As DIRECT was very competitive on some of Andrea's benchmarks, it could be useful to mimic the scipydirect repo for inclusion into SciPy (MIT license). The code is unfortunately an f2py port of the original Fortran implementation, which has hard-coded limits on the number of function evaluations (90,000) and iterations (6,000). Any opinions on this?
I personally am very impressed by BiteOpt's performance, and although it ranked very high in other global optimization benchmarks, there is no formal paper on it yet. I understand the SciPy guidelines in a way that such a paper is a prerequisite for inclusion into SciPy.
Best,
Daniel
On Sun, 17 Jan 2021 at 14:33, Andrea Gavana <andrea.gavana@gmail.com> wrote:
Hi Stefan,
You’re most welcome :-) . I’m happy the experts in the community are commenting and suggesting things, and constructive criticism is also always welcome.
On Sun, 17 Jan 2021 at 12:11, Stefan Endres <stefan.c.endres@gmail.com> wrote:
Dear Andrea,
Thank you very much for this detailed analysis. I don't think I've seen such a large collection of benchmark test suites, or of DFO algorithms, since the publication by Rios and Sahinidis in 2013. Some questions:
Many of the commercial algorithms offer free licenses for benchmarking problems of fewer than 10 dimensions. Would you be willing to include some of these in your benchmarks at some point? It would be a great reference to use.
I’m definitely willing to include those commercial algorithms. The test suite per se is almost completely automated, so it’s not that complicated to add one or more solvers. I’m generally more inclined to test open source algorithms, but there’s nothing stopping the inclusion of commercial ones.
I welcome any suggestions related to commercial solvers, as long as they can run on Python 2 / Python 3 and on Windows (I might be able to set up a Linux virtual machine if absolutely needed, but that would defeat part of the purpose of the exercise - SHGO, Dual Annealing and the other SciPy solvers run on all platforms that support SciPy).
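(As a purely hypothetical illustration of such an automated setup - this is not Andrea's actual harness, and the `run_solver` helper is invented for the sketch - adding a solver to the benchmark loop then means adding one branch behind a uniform entry point:)

import numpy as np
from scipy.optimize import differential_evolution, dual_annealing

def rastrigin(x):
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def run_solver(name, func, bounds, seed=0):
    # Uniform entry point: each solver maps to one branch; nothing else changes.
    if name == "DE":
        res = differential_evolution(func, bounds, seed=seed)
    elif name == "DualAnnealing":
        res = dual_annealing(func, bounds, seed=seed, maxfun=20000)
    else:
        raise ValueError(f"unknown solver: {name}")
    return res.x, res.fun

bounds = [(-5.12, 5.12)] * 4
for solver in ("DE", "DualAnnealing"):
    x, f = run_solver(solver, rastrigin, bounds)
    print(solver, f)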
The collection of test suites you've garnered could be immensely useful for further algorithm development. Is there a possibility of releasing the code publicly (presumably after you've published the results in a journal)?
In this case I would also like to volunteer to run some of the commercial solvers on the benchmark suite. It would also help to have a central repository for fixing bugs and adding lower global minima when they are found (of which there are quite a few).
I’m still sorting out all the implications related to a potential paper with my employer, but as far as I can see there shouldn’t be any problem with that: assuming everything goes as it should, I will definitely push for making the code open source.
Comments on shgo:
High RAM use in higher dimensions:
In the higher dimensions the new simplicial sampling can be used (not pushed to scipy yet; I still need to update some documentation before the PR). This alleviates, but does not eliminate, the memory leak issue. As you've said, SHGO is best suited to problems below 10 dimensions, as anything higher leaves the realm of DFO problems and starts to enter the domain of NLP problems. My personal preference in this case is to use the stochastic algorithms (basinhopping and differential evolution) on problems where it is known that a gradient-based solver won't work.
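(As a concrete illustration of that fallback - a minimal sketch, not taken from Stefan's message - basinhopping on a 20-dimensional Ackley function, with L-BFGS-B as the local minimizer:)

import numpy as np
from scipy.optimize import basinhopping

def ackley(x):
    a, b, c = 20.0, 0.2, 2 * np.pi
    n = x.size
    return (-a * np.exp(-b * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(c * x)) / n) + a + np.e)

rng = np.random.default_rng(0)
x0 = rng.uniform(-5, 5, size=20)  # 20-D: well past the comfortable DFO range
res = basinhopping(ackley, x0, niter=100, seed=1,
                   minimizer_kwargs={"method": "L-BFGS-B",
                                     "bounds": [(-5, 5)] * 20})
print(res.fun)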
An exception to this "rule" is when special grey box information such as symmetry of the objective function (something that can be supplied to shgo to push the applicability of the algorithm up to ~100 variables) or pre-computed bounds on the Lipschitz constants is known.
In the symmetry case SHGO can solve these problems by supplying the `symmetry` option (which I used in the previous benchmarks for the JOGO publication; I did not specifically check whether performance actually improved on those problems, but shgo did converge on all benchmark problems in the scipy test suite).
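(A minimal sketch of the `symmetry` option in action, on an objective that is invariant under permutation of its variables; note that the exact form the option takes - a bool or a list - has varied across shgo/SciPy versions, so check the documentation of the version you have installed.)

import numpy as np
from scipy.optimize import shgo

def styblinski_tang(x):
    # Fully symmetric: swapping any two coordinates leaves the value unchanged.
    x = np.asarray(x)
    return 0.5 * np.sum(x**4 - 16 * x**2 + 5 * x)

bounds = [(-5.0, 5.0)] * 4
res = shgo(styblinski_tang, bounds, options={'symmetry': True})
print(res.x, res.fun)  # minimum near x_i = -2.903534 in every coordinate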
I have had a few reports of memory leaks from various users. I have spoken to a few collaborators about the possibility of finding a Masters student to cythonize some of the code or otherwise improve it. Hopefully, this will happen in the summer semester of 2021.
To be honest I wouldn’t be so concerned in general: SHGO is an excellent global optimization algorithm and it consistently ranks at the top, no matter what problems you throw at it. Together with Dual Annealing, SciPy has gained two phenomenal nonlinear solvers and I’m very happy to see that SciPy is now at the cutting edge of the open source global optimization universe.
Andrea.
Thank you again for compiling this large set of benchmark results.
Best regards,
Stefan
Hi Daniel,
Sorry for the extremely delayed reply.
On Thu, Mar 11, 2021 at 10:19 AM Daniel Schmitz <danielschmitzsiegen@googlemail.com> wrote:
Hey all,
after Andrea's suggestions, I did an extensive GitHub search and found several global optimization Python libraries which mimic the SciPy API, so that a user only has to change the import statements. Could it be useful to add a page in the documentation listing these?
This does sound useful. Probably not a whole page, more like a note in the `minimize` docstring with references, I'd think.
A non-exhaustive list:
DIRECT: https://github.com/andim/scipydirect
DE/PSO/CMA-ES: https://github.com/keurfonluu/stochopy
PSO: https://github.com/jerrytheo/psopy
Powell's derivative-free optimizers: https://www.pdfo.net/index.html
As DIRECT was very competitive on some of Andrea's benchmarks, it could be useful to mimic the scipydirect repo for inclusion into SciPy (MIT license). The code is unfortunately an f2py port of the original Fortran implementation, which has hard-coded limits on the number of function evaluations (90,000) and iterations (6,000). Any opinions on this?
This sounds like a good idea. Would you mind opening a GitHub issue with the feature request, so we can keep track of this? Contacting the original author about this would also be useful; if the author would like to upstream their code, that'd be a good outcome. Re the hard-coded limits, I assume those can be removed without too much trouble.
I personally am very impressed by BiteOpt's performance, and although it ranked very high in other global optimization benchmarks, there is no formal paper on it yet. I understand the SciPy guidelines in a way that such a paper is a prerequisite for inclusion into SciPy.
Well, we do have the very extensive benchmarks - if it does indeed perform better than what we have, then of course we'd be happy to add a new algorithm even if it doesn't have a paper. We use papers and the citations they have as an indication of usefulness only; anything that outperforms our existing code is clearly useful.
Cheers,
Ralf
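(To make the drop-in idea concrete, here is a minimal sketch based on scipydirect's documented SciPy-style `minimize` entry point; the call signature and result fields are assumptions taken from its README, so verify against the repo before relying on them.)

import numpy as np
from scipydirect import minimize  # swapped in for scipy.optimize

def six_hump_camel(x):
    # Classic 2-D test function with two global minima.
    x1, x2 = x[0], x[1]
    return ((4 - 2.1 * x1**2 + x1**4 / 3) * x1**2
            + x1 * x2 + (-4 + 4 * x2**2) * x2**2)

bounds = [(-3.0, 3.0), (-2.0, 2.0)]
res = minimize(six_hump_camel, bounds)  # same look and feel as SciPy
print(res.x, res.fun)                   # global minimum is about -1.0316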
Hey Ralf,
the discussion for DIRECT was opened here: https://github.com/scipy/scipy/issues/14121 and the scipydirect maintainer was pinged: https://github.com/andim/scipydirect/issues/9 .
About the general discussion regarding global optimization in SciPy: I am working on a SciPy-style API for NLopt in my free time (mostly missing documentation at the moment) and would like to benchmark its MLSL and StoGO algorithms using the SciPy benchmarks. Currently, this seems very complicated, as any new algorithm would have to be included into SciPy first. Is there a way to circumvent this?
Of course, Andrea's benchmark suite would be the mother of all benchmarks if available ;).
Cheers,
Daniel
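(For anyone wanting to try this before a wrapper lands: a minimal sketch of driving MLSL through NLopt's own Python bindings, assuming the nlopt package is installed; the Branin test function and all parameter choices are illustrative, not part of Daniel's wrapper.)

import numpy as np
import nlopt

def branin(x, grad):
    # NLopt objectives receive (x, grad); grad is unused here because both
    # the MLSL layer and the Nelder-Mead local optimizer are derivative-free.
    a, b, c = 1.0, 5.1 / (4 * np.pi**2), 5.0 / np.pi
    r, s, t = 6.0, 10.0, 1.0 / (8 * np.pi)
    return (a * (x[1] - b * x[0]**2 + c * x[0] - r)**2
            + s * (1 - t) * np.cos(x[0]) + s)

n = 2
opt = nlopt.opt(nlopt.GN_MLSL_LDS, n)  # MLSL with low-discrepancy sampling
opt.set_local_optimizer(nlopt.opt(nlopt.LN_NELDERMEAD, n))
opt.set_lower_bounds([-5.0, 0.0])
opt.set_upper_bounds([10.0, 15.0])
opt.set_min_objective(branin)
opt.set_maxeval(2000)
x_best = opt.optimize(np.array([2.5, 7.5]))
print(x_best, opt.last_optimum_value())  # global minimum is about 0.397887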
MPM2: Implementing the “Multiple Peaks Model 2”, there is a Python
implementation at https://github.com/jakobbossek/smoof/blob/master/inst/mpm2.py . This is a test problem generator also used in the smoof library, I have taken the code almost as is and generated 480 benchmark functions with dimensionality varying from 2 to 5.
RandomFields: as described in
https://www.researchgate.net/publication/301940420_Global_optimization_test_... , it generates benchmark functions by “smoothing” one or more multidimensional discrete random fields and composing them. No source code is given, but the implementation is fairly straightforward from the article itself.
NIST: not exactly the realm of Global Optimization solvers, but the
NIST StRD dataset can be used to generate a single objective function as “sum of squares”. I have used the NIST dataset as implemented in lmfit,
GlobalLib: Arnold Neumaier maintains a suite of test problems termed
“COCONUT Benchmark” and Sahinidis has converted the GlobalLib and PricentonLib AMPL/GAMS dataset into C/Fortran code ( http://archimedes.cheme.cmu.edu/?q=dfocomp ). I have used a simple C
The global minima are taken from Sahinidis or from Neumaier or
refined using the NEOS server when the accuracy of the reported minima is too low. The suite contains 181 test functions with dimensionality varying between 2 and 9.
CVMG: another “landscape generator”, I had to dig it out using the
Wayback Machine at http://web.archive.org/web/20100612044104/https://www.cs.uwyo.edu/~wspears/m... , the acronym stands for “Continuous Valued Multimodality Generator”. Source code is in C++ but it’s fairly easy to port it to Python. In addition to the original implementation (that uses the Sigmoid as a softmax/transformation function) I have added a few others to create varied landscapes. 360 test problems have been generated, with dimensionality ranging from 2 to 5.
NLSE: again, not really the realm of Global optimization solvers, but
Nonlinear Systems of Equations can be transformed to single objective functions to optimize. I have drawn from many different sources (Publications, ALIAS/COPRIN and many others) to create 44 systems of nonlinear equations with dimensionality ranging from 2 to 8.
Schoen: based on the early work of Fabio Schoen and his short note on
a simple but interesting idea on a test function generator, I have taken
Many thanks go to Professor Fabio Schoen for providing an updated
copy of the source code and for the email communications.
Robust: the last benchmark test suite for this exercise, it is
actually composed of 5 different kind-of analytical test function generators, containing deceptive, multimodal, flat functions depending on
wrote: previous exercise in 2013. the overall grand total number of functions evaluations stands at 3,859,786,025 - close to 4 billion. Not bad. the original Matlab code from A. Neumaier and W. Huyer (giving then for free also GLS and MINQ) I have added a few, minor improvements compared to the original implementation. presented in this GitHub link: the NLOpt package: the implementation as it stands in SciPy: the original Matlab code from A. Neumaier and W. Huyer (giving then for free also GLS and MINQ): the SciPy implementation: the original C code (available at http://netlib.org/toms/) and converted it to Python. script to generate 240 problems with dimensionality varying from 2 to 5 by outputting the generator parameters in text files, then used Python to create the objective functions based on those parameters and the benchmark methodology. thus creating 27 test problems with dimensionality ranging from 2 to 9. parser to convert the benchmarks from C to Python. the C code in the note and converted it into Python, thus creating 285 benchmark functions with dimensionality ranging from 2 to 6. the settings. Matlab source code is available at http://www.alimirjalili.com/RO.html , I simply converted it to Python and created 420 benchmark functions with dimensionality ranging from 2 to 6.
Enjoy, and Happy 2021 :-) .
Andrea.
_______________________________________________
SciPy-Dev mailing list SciPy-Dev@python.org https://mail.python.org/mailman/listinfo/scipy-dev
-- Stefan Endres (MEng, AMIChemE, BEng (Hons) Chemical Engineering)
Wissenchaftlicher Mitarbeiter: Leibniz Institute for Materials Engineering IWT, Badgasteiner Straße 3, 28359 Bremen, Germany Work phone (DE): +49 (0) 421 218 51238 Cellphone (DE): +49 (0) 160 949 86417 Cellphone (ZA): +27 (0) 82 972 42 89 E-mail (work): s.endres@iwt.uni-bremen.de Website: https://stefan-endres.github.io/ _______________________________________________ SciPy-Dev mailing list SciPy-Dev@python.org https://mail.python.org/mailman/listinfo/scipy-dev
_______________________________________________ SciPy-Dev mailing list SciPy-Dev@python.org https://mail.python.org/mailman/listinfo/scipy-dev
SciPy-Dev mailing list SciPy-Dev@python.org https://mail.python.org/mailman/listinfo/scipy-dev
_______________________________________________ SciPy-Dev mailing list SciPy-Dev@python.org https://mail.python.org/mailman/listinfo/scipy-dev
Hi Daniel,
On Mon, 24 May 2021 at 09.23, Daniel Schmitz <danielschmitzsiegen@googlemail.com> wrote:
Hey Ralf,
discussion for DIRECT was opened here: https://github.com/scipy/scipy/issues/14121 and the scipydirect maintainer was pinged: https://github.com/andim/scipydirect/issues/9 .
About the general discussion regarding global optimization in SciPy: I am working on a SciPy style API for NLopt in my free time (mostly missing documentation at the moment) and would like to benchmark its MLSL and StoGO algorithms using the SciPy benchmarks. Currently, this seems very complicated as any new algorithm would have to be included into SciPy first. Is there a way to circumvent this? Of course, Andrea's benchmark suite would be the mother of all benchmarks if available ;).
I think it sounds like a very good idea to design a SciPy-style API for NLopt. In my personal experience, the two algorithms you mention (MLSL and StoGO) are generally less performant than the other ones available in NLopt. That said, assuming enough interest was there and some spare time, I could easily rerun the whole benchmarks with those two methods as well.
Andrea.
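As a rough illustration, a SciPy-style facade over NLopt's Python bindings might look something like the sketch below. The wrapper name and signature are invented for illustration and are not the API Daniel is actually developing; only the nlopt and scipy.optimize calls themselves are real.

import nlopt
import numpy as np
from scipy.optimize import OptimizeResult

def nlopt_minimize(fun, x0, bounds, algorithm=nlopt.GN_MLSL_LDS, maxeval=10000):
    # Hypothetical SciPy-style facade over NLopt (illustrative sketch only).
    x0 = np.asarray(x0, dtype=float)
    opt = nlopt.opt(algorithm, x0.size)
    opt.set_lower_bounds(np.array([lo for lo, hi in bounds]))
    opt.set_upper_bounds(np.array([hi for lo, hi in bounds]))
    # NLopt objectives receive (x, grad); grad stays empty for derivative-free runs.
    opt.set_min_objective(lambda x, grad: float(fun(x)))
    # MLSL is a multistart method and needs a local optimizer attached.
    opt.set_local_optimizer(nlopt.opt(nlopt.LN_BOBYQA, x0.size))
    opt.set_maxeval(maxeval)
    x = opt.optimize(x0)
    return OptimizeResult(x=x, fun=opt.last_optimum_value(),
                          success=opt.last_optimize_result() > 0)

res = nlopt_minimize(lambda x: np.sum((x - 1.0) ** 2), x0=[0.0, 0.0], bounds=[(-5, 5), (-5, 5)])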
Cheers, Daniel
On Sun, 25 Apr 2021 at 22:32, Ralf Gommers <ralf.gommers@gmail.com> wrote:
Hi Daniel,
Sorry for the extremely delayed reply.
On Thu, Mar 11, 2021 at 10:19 AM Daniel Schmitz <danielschmitzsiegen@googlemail.com> wrote:
Hey all,
after Andrea's suggestions, I did an extensive GitHub search and found several global optimization Python libraries which mimic the SciPy API, so that a user only has to change the import statements. Could it be useful to add a page to the documentation listing these?
This does sound useful. Probably not a whole page, more like a note in the `minimize` docstring with references I'd think.
Non-exhaustive list:
DIRECT: https://github.com/andim/scipydirect
DE/PSO/CMA-ES: https://github.com/keurfonluu/stochopy
PSO: https://github.com/jerrytheo/psopy
Powell's derivative-free optimizers: https://www.pdfo.net/index.html
As DIRECT was very competitive on some of Andrea's benchmarks, it could be useful to mimic the scipydirect repo for inclusion into scipy (MIT license). The code is unfortunately an f2py port of the original Fortran implementation, which has hard-coded bounds on the number of function evaluations (90,000) and iterations (6,000). Any opinions on this?
This sounds like a good idea. Would you mind opening a GitHub issue with the feature request, so we keep track of this? Contacting the original author about this would also be useful; if the author would like to upstream their code, that'd be a good outcome.
Re hard coded bounds, I assume those can be removed again without too much trouble.
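For reference, the drop-in pattern mentioned above is already how scipydirect presents itself. A minimal sketch, assuming its documented minimize(func, bounds) interface; the test function here is Styblinski-Tang, chosen only for illustration:

import numpy as np
from scipydirect import minimize  # drop-in style: only the import changes vs. scipy.optimize

def styblinski_tang(x):
    # Global minimum at x_i ~ -2.9035 in every coordinate.
    x = np.asarray(x)
    return 0.5 * np.sum(x**4 - 16.0 * x**2 + 5.0 * x)

bounds = [(-5.0, 5.0), (-5.0, 5.0)]
res = minimize(styblinski_tang, bounds)  # DIRECT needs only bounds, no starting point
print(res.x, res.fun)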
I personally am very impressed by biteopt's performance, and although it ranked very high in other global optimization benchmarks, there is no formal paper on it yet. I understand the scipy guidelines to mean that such a paper is a prerequisite for inclusion into scipy.
Well, we do have the very extensive benchmarks - if it does indeed perform better than what we have, then of course we'd be happy to add a new algorithm even if it doesn't have a paper. We use papers and the citations it has as an indication of usefulness only; anything that outperforms our existing code is clearly useful.
Cheers, Ralf
Best,
Daniel
On Sun, 17 Jan 2021 at 14:33, Andrea Gavana <andrea.gavana@gmail.com> wrote:
Hi Stefan,
You’re most welcome :-) . I’m happy the experts in the community
are commenting and suggesting things, and constructive criticism is also always welcome.
On Sun, 17 Jan 2021 at 12.11, Stefan Endres <stefan.c.endres@gmail.com> wrote:
Dear Andrea,
Thank you very much for this detailed analysis. I don't think I've
seen such a large collection of benchmark test suites or collection of DFO algorithms since the publication by Rios and Sahinidis in 2013. Some questions:
Many of the commercial algorithms offer free licenses for
benchmarking problems of less than 10 dimensions. Would you be willing to include some of these in your benchmarks at some point? It would be a great reference to use.
I’m definitely willing to include those commercial algorithms. The test suite per se is almost completely automated, so it’s not that complicated to add one or more solvers. I’m generally more inclined toward testing open source algorithms, but there’s nothing stopping the inclusion of commercial ones.
I welcome any suggestions related to commercial solvers, as long as they can run on Python 2 / Python 3 and on Windows (I might be able to set up a Linux virtual machine if absolutely needed, but that would defy part of the purpose of the exercise - SHGO, Dual Annealing and the other SciPy solvers run on all platforms that support SciPy).
The collection of test suites you've garnered could be immensely useful for further algorithm development. Is there a possibility of releasing the code publicly (presumably after you've published the results in a journal)?
In this case I would also like to volunteer to run some of the commercial solvers on the benchmark suite.
It would also help to have a central repository for fixing bugs and adding lower global minima when they are found (of which there are quite a few).
I’m still sorting out all the implications related to a potential paper with my employer, but as far as I can see there shouldn’t be any problem with that: assuming everything goes as it should, I will definitely push for making the code open source.
Comments on shgo:
High RAM use in higher dimensions:
In the higher dimensions the new simplicial sampling can be used (not pushed to scipy yet; I still need to update some documentation before the PR). This alleviates, but does not eliminate, the memory leak issue. As you've said, SHGO is best suited to problems below 10 dimensions, as anything higher leaves the realm of DFO problems and starts to enter the domain of NLP problems. My personal preference in this case is to use the stochastic algorithms (basinhopping and differential evolution) on problems where it is known that a gradient based solver won't work.
An exception to this "rule" is when special grey box information is known, such as symmetry of the objective function (something that can be supplied to shgo to push the applicability of the algorithm up to ~100 variables) or pre-computed bounds on the Lipschitz constants.
In the symmetry case SHGO can solve these by supplying the `symmetry` option (which was used in the previous benchmarks done by me for the JOGO publication, although I did not specifically check if performance was actually improved on those problems, but shgo did converge on all benchmark problems in the scipy test suite).
I have had a few reports of memory leaks from various users. I have spoken to a few collaborators about the possibility of finding a Masters student to cythonize some of the code or otherwise improve it. Hopefully, this will happen in the summer semester of 2021.
To be honest I wouldn’t be so concerned in general: SHGO is an
excellent global optimization algorithm and it consistently ranks at the top, no matter what problems you throw at it. Together with Dual Annealing, SciPy has gained two phenomenal nonlinear solvers and I’m very happy to see that SciPy is now at the cutting edge of the open source global optimization universe.
Andrea.
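For anyone who wants to try the two solvers praised above, both are a one-liner from the public scipy.optimize API. A minimal sketch on the Rosenbrock function:

from scipy.optimize import shgo, dual_annealing, rosen

bounds = [(-5.0, 5.0)] * 2
res_shgo = shgo(rosen, bounds)                  # simplicial sampling by default
res_da = dual_annealing(rosen, bounds, seed=0)  # seeded for a reproducible run
print(res_shgo.x, res_shgo.fun)
print(res_da.x, res_da.fun)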
Thank you again for compiling this large set of benchmark results.
Best regards, Stefan
On Mon, May 24, 2021 at 9:36 AM Andrea Gavana <andrea.gavana@gmail.com> wrote:
Hi Daniel,
On Mon, 24 May 2021 at 09.23, Daniel Schmitz <danielschmitzsiegen@googlemail.com> wrote:
Hey Ralf,
discussion for DIRECT was opened here: https://github.com/scipy/scipy/issues/14121 and the scipydirect maintainer was pinged: https://github.com/andim/scipydirect/issues/9 .
Thanks! I commented on the issue about having checked the license info and what I think needs to be done.
About the general discussion regarding global optimization in SciPy: I am working on a SciPy style API for NLopt in my free time (mostly missing documentation at the moment) and would like to benchmark its MLSL and StoGO algorithms using the SciPy benchmarks. Currently, this seems very complicated as any new algorithm would have to be included into SciPy first. Is there a way to circumvent this? Of course, Andrea's benchmark suite would be the mother of all benchmarks if available ;).
I think it sounds like a very good idea to design a SciPy-style API for NLopt.
This seems valuable for LGPL'd algorithms that we cannot include directly. Having an optional dependency also has maintenance and CI costs, so for BSD/MIT-licensed algorithms I'd say we prefer to vendor them rather than have a scikit-umfpack like optional dependency. Unless there's really a ton of code - but if it's one or two algorithms, then vendoring seems like the way to go. Cheers, Ralf
Thanks to this discussion there is now already a PR by Gagandeep Singh to add DIRECT (via the original Fortran implementation) to scipy: https://github.com/scipy/scipy/pull/14300
As I thought we might want to exploit the momentum built up here, I also pinged the developer of the Python wrapper for biteopt (MIT license) to ask if he would be interested in helping to add it to scipy: https://github.com/leonidk/biteopt/pull/1
Another solver which has gained some popularity (it is, for example, part of lmfit) and outperformed scipy's solvers for certain types of problems in Andrea's benchmark is AMPGO. If there was a license for it, I would also volunteer to add it to scipy (realistic timeframe: until the end of the year).
Cheers, Daniel
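For completeness, AMPGO can already be driven through lmfit today. A minimal sketch, assuming lmfit's minimize(..., method='ampgo') interface; Himmelblau's function is chosen only for illustration:

import lmfit

def objective(params):
    # Himmelblau's function: four global minima, all with value 0.
    x, y = params['x'].value, params['y'].value
    return (x**2 + y - 11.0)**2 + (x + y**2 - 7.0)**2

params = lmfit.Parameters()
params.add('x', value=0.0, min=-5.0, max=5.0)
params.add('y', value=0.0, min=-5.0, max=5.0)

result = lmfit.minimize(objective, params, method='ampgo')
print(result.params['x'].value, result.params['y'].value)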
On Mon, May 24, 2021 at 9:36 AM Andrea Gavana <andrea.gavana@gmail.com> wrote:
Hi Daniel,
On Mon, 24 May 2021 at 09.23, Daniel Schmitz < danielschmitzsiegen@googlemail.com> wrote:
Hey Ralf,
discussion for DIRECT was opened here: https://github.com/scipy/scipy/issues/14121 and the scipydirect maintainer was pinged: https://github.com/andim/scipydirect/issues/9 .
Thanks! I commented on the issue about having checked the license info and what I think needs to be done.
About the general discussion regarding global optimization in SciPy: I am working on a SciPy style API for NLopt in my free time (mostly missing documentation at the moment) and would like to benchmark its MLSL and StoGO algorithms using the SciPy benchmarks. Currently, this seems very complicated as any new algorithm would have to be included into SciPy first. Is there a way to circumvent this? Of course, Andrea's benchmark suite would be the mother of all benchmarks if available ;).
I think it sounds like an very good idea to design a SciPy stile API for NLOpt.
This seems valuable for LGPL'd algorithms that we cannot include directly. Having an optional dependency also has maintenance and CI costs, so for BSD/MIT-licensed algorithms I'd say we prefer to vendor them rather than have a scikit-umfpack like optional dependency. Unless there's really a ton of code - but if it's one or two algorithms, then vendoring seems like the way to go.
Cheers, Ralf
In my personal experience the two algorithms you mention (MLSL and StoGo) are generally less performant compared to the other ones available in NLOpt.
That said, assuming enough interest was there and some spare time, I could easily rerun the whole benchmarks with those two methods as well.
Andrea.
Cheers, Daniel
On Sun, 25 Apr 2021 at 22:32, Ralf Gommers <ralf.gommers@gmail.com> wrote:
Hi Daniel,
Sorry for the extremely delayed reply.
On Thu, Mar 11, 2021 at 10:19 AM Daniel Schmitz < danielschmitzsiegen@googlemail.com> wrote:
Hey all,
after Andrea's suggestions, I did an extensive github search and found several global optimization python libraries which mimic the scipy API, so that a user only has to change the import statements. Could it be useful to add a page in the documentation of these?
This does sound useful. Probably not a whole page, more like a note in the `minimize` docstring with references I'd think.
Non exhaustive list:
DIRECT: https://github.com/andim/scipydirect DE/PSO/CMA-ES: https://github.com/keurfonluu/stochopy PSO: https://github.com/jerrytheo/psopy Powell's derivative free optimizers: https://www.pdfo.net/index.html
As DIRECT was very competitive on some of Andrea's benchmarks, it could be useful to mimic the scipydirect repo for inclusion into scipy (MIT license). The code is unfortunately a f2py port of the original Fortran implementation which has hard coded bounds on the number of function evaluations (90.000) and iterations (6.000). Any opinions on this?
This sounds like a good idea. Would you mind opening a GitHub issue with the feature request, so we keep track of this? Contacting the original author about this would also be useful; if the author would like to upstream their code, that'd be a good outcome.
Re hard coded bounds, I assume those can be removed again without too much trouble.
I personally am very impressed by biteopt's performance, and although it ranked very high in other global optimization benchmarks there is no formal paper on it yet. I understand the scipy guidelines in a way that such a paper is a requisite for inclusion into scipy.
Well, we do have the very extensive benchmarks - if it does indeed perform better than what we have, then of course we'd be happy to add a new algorithm even if it doesn't have a paper. We use papers and the citations it has as an indication of usefulness only; anything that outperforms our existing code is clearly useful.
Cheers, Ralf
Best,
Daniel
Daniel
On Sun, 17 Jan 2021 at 14:33, Andrea Gavana <andrea.gavana@gmail.com> wrote:
Hi Stefan,
You’re most welcome :-) . I’m happy the experts in the community
are commenting and suggesting things, and constructive criticism is also always welcome.
On Sun, 17 Jan 2021 at 12:11, Stefan Endres <stefan.c.endres@gmail.com> wrote:

> Dear Andrea,
>
> Thank you very much for this detailed analysis. I don't think I've seen such a large collection of benchmark test suites or collection of DFO algorithms since the publication by Rios and Sahinidis in 2013. Some questions:
>
> Many of the commercial algorithms offer free licenses for benchmarking problems of fewer than 10 dimensions. Would you be willing to include some of these in your benchmarks at some point? It would be a great reference to use.

I’m definitely willing to include those commercial algorithms. The test suite per se is almost completely automated, so it’s not that complicated to add one or more solvers. I’m generally more inclined to test open source algorithms, but there’s nothing stopping the inclusion of commercial ones.

I welcome any suggestions related to commercial solvers, as long as they can run on Python 2 / Python 3 and on Windows (I might be able to set up a Linux virtual machine if absolutely needed, but that would defeat part of the purpose of the exercise - SHGO, Dual Annealing and the other SciPy solvers run on all platforms that support SciPy).

> The collection of test suites you've garnered could be immensely useful for further algorithm development. Is there a possibility of releasing the code publicly (presumably after you've published the results in a journal)?
>
> In this case I would also like to volunteer to run some of the commercial solvers on the benchmark suite. It would also help to have a central repository for fixing bugs and adding lower global minima when they are found (of which there are quite a few).

I’m still sorting out all the implications related to a potential paper with my employer, but as far as I can see there shouldn’t be any problem with that: assuming everything goes as it should, I will definitely push for making the code open source.

> Comments on shgo:
>
> High RAM use in higher dimensions: in the higher dimensions the new simplicial sampling can be used (not pushed to scipy yet; I still need to update some documentation before the PR). This alleviates, but does not eliminate, the memory leak issue. As you've said, SHGO is best suited to problems below 10 dimensions, as anything higher leaves the realm of DFO problems and starts to enter the domain of NLP problems. My personal preference in this case is to use the stochastic algorithms (basinhopping and differential evolution) on problems where it is known that a gradient-based solver won't work.
>
> An exception to this "rule" is when special grey-box information is known, such as symmetry of the objective function (something that can be supplied to shgo to push the applicability of the algorithm up to ~100 variables) or pre-computed bounds on the Lipschitz constants.
>
> In the symmetry case SHGO can solve these by supplying the `symmetry` option (which was used in the previous benchmarks done by me for the JOGO publication, although I did not specifically check if performance was actually improved on those problems, but shgo did converge on all benchmark problems in the scipy test suite).
>
> I have had a few reports of memory leaks from various users. I have spoken to a few collaborators about the possibility of finding a Masters student to cythonize some of the code or otherwise improve it. Hopefully, this will happen in the summer semester of 2021.
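For readers who haven't used these features, here is a minimal sketch of the shgo options discussed above - the `sampling_method='simplicial'` argument and the `symmetry` option - on a trivially symmetric test function of my own choosing; the exact form accepted by `options['symmetry']` may differ between SciPy versions:

    # Sketch of the shgo options discussed above; the sphere function is an
    # arbitrary, fully symmetric example, not one of the benchmark problems.
    import numpy as np
    from scipy.optimize import shgo

    def sphere(x):
        # Symmetric in all of its variables, so the `symmetry` option applies.
        return np.sum(np.asarray(x) ** 2)

    bounds = [(-10.0, 10.0)] * 4
    res = shgo(sphere, bounds,
               sampling_method='simplicial',  # the sampling discussed above
               options={'symmetry': True})    # exploit variable symmetry
    print(res.x, res.fun)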
To be honest I wouldn’t be so concerned in general: SHGO is an excellent global optimization algorithm and it consistently ranks at the top, no matter what problems you throw at it. Together with Dual Annealing, SciPy has gained two phenomenal nonlinear solvers, and I’m very happy to see that SciPy is now at the cutting edge of the open source global optimization universe.
Andrea.
> Thank you again for compiling this large set of benchmark results.
>
> Best regards,
> Stefan
_______________________________________________
SciPy-Dev mailing list
SciPy-Dev@python.org
https://mail.python.org/mailman/listinfo/scipy-dev
participants (6)
- Andrea Gavana
- Andrew Nelson
- Daniel Schmitz
- Hans Dembinski
- Ralf Gommers
- Stefan Endres