[Numpy-discussion] [ANN] Constrained optimization solver with guaranteed precision

Dmitrey tmp50 at ukr.net
Mon Aug 15 16:09:37 EDT 2011


 Hi Andrea,
   I believe benchmarks should be done the way Hans Mittelmann does them
   ( http://plato.asu.edu/bench.html ), and of course the number of
   function evaluations matters when slow Python code is tested against
   compiled code, but my current work does not leave me enough time for
   OpenOpt development itself, let alone for auxiliary work such as
   benchmarking (and doing it properly like that). Also, benchmarks of
   one's own software are usually not very trustworthy, especially when
   run on one's own problems.

   BTW, please don't reply to my posts on the scipy mailing lists - I use
   them only to post announcements like this one and may miss a reply.

   Regards, D.

   --- Original message ---
   From: "Andrea Gavana" <andrea.gavana at gmail.com>
   To: "Discussion of Numerical Python" <numpy-discussion at scipy.org>
   Date: 15 August 2011, 23:01:05
   Subject: Re: [Numpy-discussion] [ANN] Constrained optimization solver
   with guaranteed precision



     Hi Dmitrey,
     
      2011/8/15 Dmitrey <tmp50 at ukr.net>:
      > Hi all,
      > I'm glad to inform you that general constraints handling for interalg (a free
      > solver with guaranteed user-defined precision) is now available. Although it
      > is very premature and requires lots of improvements, it is already capable
      > of outperforming the commercial solver BARON (example:
      > http://openopt.org/interalg_bench#Test_4 ), and thus you could be interested
      > in trying it right now (the next OpenOpt release will be no sooner than one
      > month from now).
     >
      > interalg can be especially effective, compared with BARON (and some other
      > competitors), on problems with a huge or nonexistent Lipschitz constant, for
      > example on functions like sqrt(x), log(x), 1/x, or x**alpha with alpha < 1,
      > when the domain of x is something like [small_positive_value, another_value]
      > (a sketch of such a setup follows this quoted message).
     >
      > Let me also remind you that interalg can search for all solutions of
      > nonlinear equations / systems of them where local solvers like
      > scipy.optimize.fsolve cannot find any (see the second sketch below), and can
      > compute single/multiple integrals with guaranteed user-defined precision
      > (the speed of integration is intended to be improved in the future).
      > However, only FuncDesigner models are handled (read the interalg webpage for
      > more details).
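
      A minimal sketch of how such a huge-Lipschitz problem might be set up,
      for anyone who wants to try this. The call signatures (oovar, GLP, the
      'interalg' solver name, fTol) follow OpenOpt's documentation; the
      objective, bounds, and tolerance here are illustrative assumptions,
      not values from the announcement:

          # Minimization over a box whose lower edge makes the Lipschitz
          # constant blow up (log(x) and 1/x are arbitrarily steep as
          # x -> 0+). Bounds and fTol are illustrative assumptions.
          from FuncDesigner import oovar, sqrt, log
          from openopt import GLP

          x = oovar('x')
          f = sqrt(x) + log(x) + 1.0 / x

          startPoint = {x: 0.5}
          p = GLP(f, startPoint, constraints=[x > 1e-8, x < 10.0])
          r = p.solve('interalg', fTol=1e-6)  # |f_found - f*| <= fTol is guaranteed
          print(r.xf)                         # minimizer, within requested precision
          print(r.ff)                         # objective value at the solution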
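
      The contrast with local solvers is easy to reproduce with scipy alone:
      fsolve converges to at most one root per starting point, and which
      root (if any) it finds depends entirely on the initial guess. A small
      self-contained illustration, with the equation chosen purely for
      demonstration:

          # sin(1/x) has infinitely many roots x = 1/(k*pi) in (0, 1]; a
          # local solver returns at most one per start, and can fail outright.
          import numpy as np
          from scipy.optimize import fsolve

          f = lambda x: np.sin(1.0 / x)

          for x0 in (0.05, 0.2, 0.9):
              root, infodict, ier, msg = fsolve(f, x0, full_output=True)
              print(x0, root, ier == 1)  # different starts -> different single
                                         # roots; ier != 1 means no convergence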
     
      Thank you for these new improvements. I am one of those who use OpenOpt
      on real-life problems, and if I may advance a suggestion (for the
      second time): when you post a benchmark of various optimization
      methods, please do not treat "elapsed time" as the only meaningful
      variable for measuring the success or failure of an algorithm.
     
      Some (most?) real-life problems require intensive, time-consuming
      simulations for every *function evaluation*; the time spent by the
      solver itself doing its calculations simply disappears next to the
      real process simulation. I know this because our simulations take
      between 2 and 48 hours to run, so what are 300 seconds more or less
      in the solver calculations? If you are talking about synthetic
      problems (such as ones defined by a formula), I can see your point.
      For everything else, I believe the number of function evaluations is
      a more direct way to assess the quality of an optimization algorithm.
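
      The count is cheap to record even when the objective is not: wrap the
      simulation call in a counter. A minimal sketch; the quadratic below is
      just a stand-in for an expensive simulation, and scipy's fmin is used
      only as an example solver:

          # Judge a solver by how many times it calls the objective, not by
          # wall time: each call here stands in for a 2-48 hour simulation.
          import numpy as np
          from scipy.optimize import fmin

          class CountingObjective:
              def __init__(self, func):
                  self.func = func
                  self.nfev = 0
              def __call__(self, x):
                  self.nfev += 1  # one (expensive) simulation per call
                  return self.func(x)

          expensive = CountingObjective(lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2)
          xopt, fopt, niter, funcalls, warnflag = fmin(
              expensive, np.zeros(2), full_output=True, disp=False)
          print(expensive.nfev, funcalls)  # both report the evaluation count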
     
     Just my 2c.
     
     Andrea.
     
   