On Sat, Jul 22, 2017 at 1:41 AM, ROSA Benoit <b.rosa@unistra.fr> wrote:
Hi,
Your reply comes at just the right time, since I had a bit of time to work on this today :)
I tried to set up the benchmarks for this too, but I ran into problems when running them.
Long story short, the benchmark fails to run, throwing an error from the ASV package (see https://pastebin.com/0tnN2hQE for the exact details). I don't have any experience with ASV, so it's quite hard for me to debug. More interestingly, it also fails on a fresh clone of the main scipy repository.
What I tried:
python runtests.py --bench optimize.BenchGlobal --> fails both on my modified version (which adds stochasticBB testing) and on the original scipy repository (errors described in the pastebin above)
python runtests.py --bench optimize.BenchLeastSquares --> works flawlessly
python runtests.py --bench optimize.BenchSmoothUnbounded --> works flawlessly
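For context on what these commands exercise: ASV benchmarks are ordinary Python classes in which any method whose name starts with `time_` is timed, with per-run state prepared in `setup`. A minimal sketch of such a class follows; the class name `BenchMyGlobal` and the choice of solver/test function are illustrative, not taken from the scipy benchmark suite:

```python
import numpy as np
from scipy.optimize import rosen, differential_evolution


class BenchMyGlobal:
    """Hypothetical ASV benchmark class: asv calls setup(), then
    times each time_* method."""

    def setup(self):
        # Search box for the 2-D Rosenbrock function
        self.bounds = [(-5.0, 5.0)] * 2

    def time_differential_evolution(self):
        # The timed body: one run of a global optimizer on rosen
        differential_evolution(rosen, self.bounds, seed=0, maxiter=50)
```

A class like this, dropped into `benchmarks/benchmarks/`, is what `runtests.py --bench` hands off to ASV, which is why an ASV-level failure stops all the `BenchGlobal` runs at once.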
Is there something I missed, or is the benchmarking pipeline for the global optimization algorithms broken at the moment?
No, it looks like something indeed got broken recently. I've filed an issue here: https://github.com/scipy/scipy/issues/7658. The issue description contains a simple fix to make things work; probably not the optimal one, but using it locally should get you started on running your new benchmarks.

Cheers,
Ralf