How can I constrain linear_least_squares to integer solutions?
Hi,
I have successfully used LinearAlgebra.linear_least_squares to estimate solutions to continuous functions. The coefficients returned in that case are of course floating-point values.
Now I have a problem where some of the terms are continuous but some must be constrained to integer multiples. As a simple example,
y = c1 * x^2 + c0 * x
where c1 is floating-point in the usual way, but c0 can only be integer-valued. (The actual problem has many more terms.)
Can anyone tell me how to constrain some of the coefficients to integer values?
Thank you for any insight.
Mark
On Nov 27, 2007 9:51 PM, Mark Schmucker <Mark.Schmucker@4dtechnology.com> wrote:
[snip]
This is not a trivial problem, as you can see by googling mixed integer least squares (MILS). Much will depend on the nature of the parameters, the number of variables you are using in the fit, and how exact the solution needs to be. One approach would be to start by rounding the coefficients that must be integer and then improve the solution using annealing or genetic algorithms to adjust the integer coefficients while fitting the remainder in the usual least-squares way, but that wouldn't have the elegance of some of the specific methods used for this sort of problem. However, I don't know of a package in scipy that implements those more sophisticated algorithms; perhaps someone else on this list who knows more about these things than I do can point you in the right direction.
Chuck
On Tue, Nov 27, 2007 at 11:07:30PM -0700, Charles R Harris wrote:
[snip]
Would this be a good candidate for a genetic algorithm? I haven't used GA before, so I don't know the typical rate of convergence or its applicability to optimization problems.
Regards,
Stéfan
Hello everyone,
I am trying to sum points onto a 3d grid using "put" with a list of computed array indices. I am looking for a version of "put" which increments, e.g. replacing

    for i in ind: a.flat[i] = v[i]

with "+=" so that it accumulates. Other operations might also be useful? I tried using "take" and then incrementing the result, but that still fails when the indices are not unique.
Thanks for any suggestions,
Jon
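For readers finding this thread later: the duplicate-index problem Jon describes is exactly what unbuffered accumulation solves. A minimal sketch with modern NumPy (np.add.at and np.bincount postdate this 2007 thread; the grid shape and values here are made up for illustration):

```python
import numpy as np

# Accumulate values onto a 3d grid at possibly repeated flat indices.
grid = np.zeros((4, 4, 4))
ind = np.array([0, 5, 5, 63])        # note the repeated index 5
v = np.array([1.0, 2.0, 3.0, 4.0])

# np.add.at performs unbuffered in-place addition: repeated indices
# each contribute, unlike grid.flat[ind] += v, which is buffered and
# keeps only the last write per duplicate index.
np.add.at(grid.reshape(-1), ind, v)   # reshape(-1) is a view into grid

# Equivalent accumulation via bincount, then reshape to the grid:
grid2 = np.bincount(ind, weights=v, minlength=grid.size).reshape(grid.shape)
```

Both forms give grid.flat[5] == 5.0 rather than 3.0, which is the behavior the buffered fancy-indexing assignment cannot provide.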
On Nov 28, 2007 12:59 AM, Stefan van der Walt <stefan@sun.ac.za> wrote:
[snip]
Would this be a good candidate for a genetic algorithm? I haven't used GA before, so I don't know the typical rate of convergence or its applicability to optimization problems.
Regards,
Stéfan
If the number of terms is not huge and the function is well behaved, it might be worth trying the following simple and stupid approach:

1. Find the floating-point minimum.
2. For each possible set of integer coefficients near the FP minimum:
   a. Solve for the floating-point coefficients with the integer coefficients held fixed.
   b. If the minimum is the best so far, stash it somewhere for later.
3. Return the best set of coefficients.

-- Tim Hochberg, tim.hochberg@ieee.org
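Tim's enumerate-and-refine idea can be sketched as below. The function name, the search radius, and the example model (Mark's y = c1*x^2 + c0*x with c0 integer) are all illustrative choices, not anything from the thread; only np.linalg.lstsq itself is a real NumPy call.

```python
import itertools
import numpy as np

def mixed_integer_lstsq(A_int, A_cont, y, radius=1):
    """Brute-force mixed-integer least squares.

    Model: y ~ A_int @ c_int + A_cont @ c_cont, with c_int integer.
    Enumerates integer vectors near the unconstrained solution and
    re-solves for the continuous coefficients each time.
    """
    # Step 1: unconstrained floating-point solution over all columns.
    A = np.hstack([A_int, A_cont])
    c_fp, *_ = np.linalg.lstsq(A, y, rcond=None)
    n_int = A_int.shape[1]
    center = np.round(c_fp[:n_int]).astype(int)

    best = (np.inf, None, None)
    # Step 2: search the integer lattice near the FP minimum.
    for off in itertools.product(range(-radius, radius + 1), repeat=n_int):
        c_int = center + np.array(off)
        resid_target = y - A_int @ c_int
        c_cont, *_ = np.linalg.lstsq(A_cont, resid_target, rcond=None)
        err = np.sum((resid_target - A_cont @ c_cont) ** 2)
        if err < best[0]:
            best = (err, c_int, c_cont)
    # Step 3: return the best (error, integer, continuous) triple found.
    return best

# Example: y = c1*x**2 + c0*x with integer c0 = 3 and c1 = 0.7.
x = np.linspace(0, 1, 50)
y = 0.7 * x**2 + 3 * x
err, c0, c1 = mixed_integer_lstsq(x[:, None], (x**2)[:, None], y)
```

Note the combinatorial cost: the lattice search is (2*radius + 1)**n_int solves, which is exactly why this only works when the number of integer terms is small.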
On Nov 28, 2007 12:59 AM, Stefan van der Walt <stefan@sun.ac.za> wrote:
[snip]
Would this be a good candidate for a genetic algorithm? I haven't used GA before, so I don't know the typical rate of convergence or its applicability to optimization problems.
It depends. Just to show the sort of problems involved, suppose you have 32 integer variables and are looking for the last bit of optimization. If the floating-point optimum is at (.5, .5, ..., .5) and the error is symmetrical, then each vertex of the surrounding integer cube is a solution and there are 2**32 of them. If the error isn't symmetrical (and with that many variables it is very likely far from symmetrical), then you have to search a larger region. That's a lot of points. The more sophisticated algorithms try to eliminate whole regions of points and keep narrowing things down, but even so the problem can easily get out of hand.

If you just need a good solution, a genetic algorithm is a good bet to find one without too much hassle. I had a similar problem designing a digital min/max FIR filter where I needed 15-bit integer coefficients for hardware implementation. There was a narrow high-rejection band in the filter, and simply rounding the coefficients left spikes in the response through that band. With a GA I was able to eliminate the spikes in about 30 minutes of evolution using a python/Numeric program. In that case the performance of annealing was quite dependent on choosing the right parameters for cooling rate, etc., while the GA was robust and straightforward. There was no guarantee that I ended up with the best solution, but what I got was good enough.

Chuck
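A minimal sketch of the GA approach Chuck describes, for the curious: evolve only the integer coefficients, and score each candidate by refitting the continuous part and measuring the residual. The population size, mutation rate, and toy model (Mark's original example) are arbitrary assumptions, not parameters from Chuck's filter work.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(c_int, A_int, A_cont, y):
    # Hold the integer coefficients fixed, refit the continuous ones,
    # and score by the remaining sum of squared residuals.
    resid_target = y - A_int @ c_int
    c_cont, *_ = np.linalg.lstsq(A_cont, resid_target, rcond=None)
    return np.sum((resid_target - A_cont @ c_cont) ** 2)

def ga_integer_fit(A_int, A_cont, y, pop_size=40, generations=60):
    n = A_int.shape[1]
    # Seed the population near the rounded unconstrained solution.
    c_fp, *_ = np.linalg.lstsq(np.hstack([A_int, A_cont]), y, rcond=None)
    center = np.round(c_fp[:n]).astype(int)
    pop = center + rng.integers(-3, 4, size=(pop_size, n))
    for _ in range(generations):
        scores = np.array([fitness(ind, A_int, A_cont, y) for ind in pop])
        elite = pop[np.argsort(scores)[: pop_size // 2]]     # selection
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        mask = rng.random((pop_size, n)) < 0.5               # uniform crossover
        children = np.where(mask, parents[:, 0], parents[:, 1])
        mutate = rng.random((pop_size, n)) < 0.1             # point mutation
        children = children + mutate * rng.integers(-1, 2, size=(pop_size, n))
        pop = children
        pop[0] = elite[0]                                    # elitism
    scores = np.array([fitness(ind, A_int, A_cont, y) for ind in pop])
    return pop[np.argmin(scores)]

# Toy problem: y = 0.7*x**2 + 3*x, with the coefficient of x integer.
x = np.linspace(0, 1, 50)
y = 0.7 * x**2 + 3 * x
best = ga_integer_fit(x[:, None], (x**2)[:, None], y)
```

As Chuck notes, there is no optimality guarantee; elitism only ensures the best candidate seen so far is never lost between generations.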
participants (5)
- Charles R Harris
- Jon Wright
- Mark Schmucker
- Stefan van der Walt
- Timothy Hochberg