[Numpy-discussion] N dimensional dichotomy optimization

Sebastian Walter sebastian.walter at gmail.com
Tue Nov 23 08:47:10 EST 2010


On Tue, Nov 23, 2010 at 11:43 AM, Gael Varoquaux
<gael.varoquaux at normalesup.org> wrote:
> On Tue, Nov 23, 2010 at 11:37:02AM +0100, Sebastian Walter wrote:
>> >> min_x f(x)
>> >> s.t.   lo <= Ax + b <= up
>> >>            0 = g(x)
>> >>            0 <= h(x)
>
>> > No constraints.
>
>> Didn't you say that you operate only within some convex hull?
>
> No. I have an initial guess that allows me to specify a convex hull in
> which the minimum should probably lie, but it's not a constraint: nothing
> bad happens if I leave that convex hull.
>
>> > Either in R^n, in the set of integers (unidimensional), or in the set of
>> > positive integers.
>> According to  http://openopt.org/Problems
>> this is a mixed integer nonlinear program http://openopt.org/MINLP .
>
> It is indeed the name I know for it; however, I have an additional
> hypothesis (namely that f is roughly convex) which makes it much easier.
>
>> I don't have experience with the solver, though; it may take a long
>> time to run since it uses branch-and-bound.
>
> Yes, this is too brutal: this is for non-convex optimization.
> Dichotomy seems well-suited for finding an optimum on the set of
> integers.
>
>> In my field of work we typically relax the integers to real numbers,
>> perform the optimization and then round to the nearest integer.
>> This is often sufficiently close to a good solution.
>
> This is pretty much what I am doing, but you have to be careful: if the
> algorithm takes steps that are smaller than 1, it sees a zero difference
> between those steps. If you are not careful, this can badly confuse the
> algorithm and trick it into not converging.
>

Ah, that clears things up a lot.
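
Just to make sure we mean the same thing by relax-and-round, here is a
rough sketch with scipy.optimize; the objective f and the starting
point are of course made up (your real f is a black box):

import numpy as np
from scipy import optimize

# Toy stand-in for the real (black-box) objective; its unconstrained
# minimum happens to sit at the integer point (3, 7).
def f(x):
    return np.sum((np.asarray(x) - np.array([3.0, 7.0])) ** 2)

x0 = np.zeros(2)  # made-up initial guess

# 1) relax: treat the integer variables as reals and minimize with
#    Nelder-Mead (scipy.optimize.fmin)
x_real = optimize.fmin(f, x0, disp=False)

# 2) round: project the continuous optimum back onto the integers
x_int = np.round(x_real).astype(int)
print(x_real, x_int, f(x_int))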

Well, I don't know what the best method is to solve your problem, so
take the following with a grain of salt:
Wouldn't it be better to change the model than to modify the
optimization algorithm?
It sounds as if the resulting objective function is piecewise
constant. AFAIK most optimization algorithms for continuous problems
require at least Lipschitz-continuous functions to work "acceptably
well". I am not sure whether this is also true for Nelder-Mead.
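
For illustration, a small sketch of the failure mode (again with a
made-up objective): if the rounding happens inside the objective, the
function Nelder-Mead sees is piecewise constant, and once the simplex
steps drop below 1 every vertex can give exactly the same value, so the
search may stall far from the optimum:

import numpy as np
from scipy import optimize

def f(x):
    # same made-up smooth objective as above, optimum at (3, 7)
    return np.sum((np.asarray(x) - np.array([3.0, 7.0])) ** 2)

def f_rounded(x):
    # rounding *inside* the objective makes it piecewise constant:
    # any two points whose coordinates round the same way give
    # exactly the same value
    return f(np.round(x))

x0 = np.zeros(2)
x_stalled = optimize.fmin(f_rounded, x0, disp=False)
# Nelder-Mead only sees the values at its simplex vertices; if they all
# lie in the same "flat" cell it has nothing to go on and may stop at
# or near the starting point instead of near (3, 7).
print(x_stalled)

The dichotomy you mention sidesteps this entirely, since the objective
is then only ever evaluated at integer points; the other option is to
keep the relaxed objective smooth during the optimization and round
only once, at the very end.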


> Thanks for your advice,
>
> Gaël