On Nov 28, 2007 12:59 AM, Stefan van der Walt <stefan@sun.ac.za> wrote:
On Tue, Nov 27, 2007 at 11:07:30PM -0700, Charles R Harris wrote:
> This is not a trivial problem, as you can see by googling mixed integer least
> squares (MILS). Much will depend on the nature of the parameters, the number of
> variables you are using in the fit, and how exact the solution needs to be. One
> approach would be to start by rounding the coefficients that must be integer
> and improve the solution using annealing or genetic algorithms to jig the
> integer coefficients while fitting the remainder in the usual least square way,
> but that wouldn't have the elegance of some of the specific methods used for
> this sort of problem. However, I don't know of a package in scipy that
> implements those more sophisticated algorithms; perhaps someone else on this
> list who knows more about these things than I can point you in the right
> direction.

Would this be a good candidate for a genetic algorithm?  I haven't
used GA before, so I don't know the typical rate of convergence or its
applicability to optimization problems.

Regards
Stéfan
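
Regarding the annealing/GA idea Charles mentions, a rough, untested sketch in
numpy might look like the following: round the unconstrained least squares
solution, then jig the integer coefficients with plain simulated annealing,
refitting the float coefficients by least squares at every step.  All the
names here (anneal_mils, A_int, A_flt, the cooling schedule) are made up for
illustration; scipy does not ship this as a ready-made routine.

import numpy as np


def _refit_floats(A_int, A_flt, y, b_int):
    # With the integer coefficients held fixed, the float coefficients are
    # an ordinary least squares problem on the remaining residual.
    b_flt, *_ = np.linalg.lstsq(A_flt, y - A_int @ b_int, rcond=None)
    err = np.linalg.norm(y - A_int @ b_int - A_flt @ b_flt)
    return err, b_flt


def anneal_mils(A_int, A_flt, y, steps=5000, t0=1.0, seed=0):
    """Round the unconstrained solution, then randomly jig the integer
    coefficients, accepting uphill moves with a temperature-dependent
    probability (plain simulated annealing)."""
    rng = np.random.default_rng(seed)

    # Start from the rounded floating point minimum.
    coefs, *_ = np.linalg.lstsq(np.hstack([A_int, A_flt]), y, rcond=None)
    b_int = np.round(coefs[:A_int.shape[1]]).astype(int)
    err, b_flt = _refit_floats(A_int, A_flt, y, b_int)
    best = (err, b_int.copy(), b_flt)

    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-12            # simple linear cooling
        trial = b_int.copy()
        trial[rng.integers(len(trial))] += rng.choice((-1, 1))   # jig one coefficient
        trial_err, trial_flt = _refit_floats(A_int, A_flt, y, trial)
        # Always accept improvements; sometimes accept a worse move.
        if trial_err < err or rng.random() < np.exp((err - trial_err) / temp):
            b_int, err, b_flt = trial, trial_err, trial_flt
            if err < best[0]:
                best = (err, b_int.copy(), b_flt)

    return best   # (residual norm, integer coefficients, float coefficients)

A GA would work much the same way: the genome is the vector of integer
coefficients and the fitness is the least squares residual after refitting
the floats.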



If the number of terms is not huge and the function is well behaved, it might be worth trying the following simple and stupid approach (a code sketch follows the list):
  1. Find the floating point minimum.
  2. For each possible set of integer coefficients near the FP minimum:
    1. Solve for the floating point coefficients with the integer coefficients held fixed.
    2. If the minimum is the best so far, stash it somewhere for later.
  3. Return the best set of coefficients.
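
Concretely, a minimal numpy sketch of steps 1-3, assuming a linear model whose
design matrix is split into integer columns A_int and float columns A_flt (the
function name, the radius parameter, and the column split are all assumptions
for illustration, not an existing scipy interface):

import itertools

import numpy as np


def brute_force_mils(A_int, A_flt, y, radius=1):
    """Enumerate every integer vector within `radius` of the rounded
    floating point solution, refit the float coefficients each time,
    and keep the best."""
    # 1. Find the floating point minimum (unconstrained least squares).
    coefs, *_ = np.linalg.lstsq(np.hstack([A_int, A_flt]), y, rcond=None)
    b_int0 = np.round(coefs[:A_int.shape[1]]).astype(int)

    best = (np.inf, None, None)
    # 2. For each possible set of integer coefficients near the FP minimum...
    for offset in itertools.product(range(-radius, radius + 1), repeat=len(b_int0)):
        b_int = b_int0 + np.asarray(offset)
        # 2.1 Solve for the float coefficients with the integers held fixed.
        b_flt, *_ = np.linalg.lstsq(A_flt, y - A_int @ b_int, rcond=None)
        # 2.2 Stash the best result seen so far.
        err = np.linalg.norm(y - A_int @ b_int - A_flt @ b_flt)
        if err < best[0]:
            best = (err, b_int, b_flt)
    # 3. Return the best set of coefficients.
    return best

The enumeration visits (2*radius + 1)**p candidates for p integer
coefficients, so it is only practical when there are a handful of them; beyond
that, something like the annealing/GA jiggling above, or a proper
branch-and-bound MILS solver, is the way to go.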


--
.  __
.   |-\
.
.  tim.hochberg@ieee.org