[SciPy-user] constrained optimization

dmitrey dmitrey.kroshko at scipy.org
Tue Apr 29 04:04:48 EDT 2008


Robert Kern wrote:
> On Mon, Apr 28, 2008 at 3:03 PM, Ondrej Certik <ondrej at certik.cz> wrote:
>   
>> On Mon, Apr 28, 2008 at 8:57 PM, Robert Kern <robert.kern at gmail.com> wrote:
>>  > On Mon, Apr 28, 2008 at 1:34 PM, John Hunter <jdh2358 at gmail.com> wrote:
>>  >  >  I need to do an N-dimensional constrained optimization over a weight
>>  >  >  vector w with the constraints:
>>  >  >
>>  >  >   * w[i] >=0
>>  >  >
>>  >  >   * w.sum() == 1.0
>>  >  >
>>  >  >  Scanning through the scipy.optimize docs, I see a number of examples
>>  >  >  where parameters can be bounded by a bracketing interval, but none
>>  >  >  where constraints can be placed on combinations of the parameters,
>>  >  >  e.g. the sum of them.  One approach I am considering is doing a
>>  >  >  bracketed [0,1] constrained optimization over N-1 weights (assigning
>>  >  >  the last weight to be 1 minus the sum of the others) and modifying my
>>  >  >  cost function to punish the optimizer when the N-1 input weights sum
>>  >  >  to more than one.
>>  >  >
>>  >  >  Is there a better approach?
>>  >
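[For concreteness, a minimal sketch of the penalty approach described
above; the quadratic cost and target vector are made up purely for
illustration, and fmin_l_bfgs_b handles the per-weight [0, 1] bounds:]

  import numpy as np
  from scipy.optimize import fmin_l_bfgs_b

  target = np.array([0.1, 0.2, 0.3, 0.4])      # placeholder data

  def cost(w_head):
      # first N-1 weights are free; the last is 1 minus their sum
      w = np.hstack([w_head, 1.0 - w_head.sum()])
      # punish the optimizer when the free weights sum to more than one
      penalty = 1e3 * max(0.0, w_head.sum() - 1.0) ** 2
      return np.sum((w - target) ** 2) + penalty

  w0 = np.ones(3) / 4.0                        # uniform starting point
  w_head, fval, info = fmin_l_bfgs_b(cost, w0, approx_grad=True,
                                     bounds=[(0.0, 1.0)] * 3)
  w = np.hstack([w_head, 1.0 - w_head.sum()])  # full weight vector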
>>  >  Transform the coordinates to an unconstrained N-1-dimensional space.
>>  >  One such transformation is the Aitchison (or "additive log-ratio")
>>  >  transform:
>>  >
>>  >   y = log(x[:-1] / x[-1])
>>  >
>>  >  And to go back:
>>  >
>>  >   tmp = hstack([exp(y), 1.0])
>>  >   x = tmp / tmp.sum()
>>  >
>>  >  Searching for "compositional data analysis" should yield similar
>>  >  transformations, but this one should be sufficient for maintaining
>>  >  constraints. For doing statistics, the others have better properties.
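[Put together, a self-contained sketch of this transform driving an
unconstrained solver; the quadratic objective and target vector are
placeholders, and any scipy.optimize minimizer would do in place of
fmin:]

  import numpy as np
  from scipy.optimize import fmin

  def to_unconstrained(x):
      # additive log-ratio (ALR): interior of the simplex -> R^(N-1)
      return np.log(x[:-1] / x[-1])

  def to_simplex(y):
      # inverse: any y in R^(N-1) gives positive weights summing to 1
      tmp = np.hstack([np.exp(y), 1.0])
      return tmp / tmp.sum()

  target = np.array([0.1, 0.2, 0.3, 0.4])   # placeholder data

  def cost(y):
      w = to_simplex(y)
      return np.sum((w - target) ** 2)

  y0 = to_unconstrained(np.ones(4) / 4.0)   # start from uniform weights
  w_opt = to_simplex(fmin(cost, y0, disp=False))
  # w_opt satisfies w_opt >= 0 and w_opt.sum() == 1 by construction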
>>
>>  Wow, that is very clever. Just today I was thinking about how to do
>>  this, and it didn't occur to me that I should read scipy-user. :)
>>
>>  The exp/log transform is clear, but I hadn't figured out that the
>>  normalization can be carried by the last element, so to go back it's
>>  enough to do:
>>
>>  y = x[:-1]/x[-1]
>>
>>  tmp = hstack([y, 1.0])
>>  x = tmp / tmp.sum()
>>
>>  Very cool, thanks. However, the transform is not one-to-one, e.g. both
>>
>>  x = [1, 2, 1, 4]
>>  x = [2, 4, 2, 8]
>>
>>  represent the same thing:
>>
>>  y = [0.25, 0.5, 0.25]
>>     
>
> Yes, that is by design. With compositional data, only the ratios
> between components matter. They are unique only up to a scaling
> factor, and typically, you normalize them such that they sum to 1.
>
>   
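[A quick check of that scale invariance, using the two vectors from the
example above:]

  import numpy as np
  for x in (np.array([1.0, 2.0, 1.0, 4.0]),
            np.array([2.0, 4.0, 2.0, 8.0])):
      print(x[:-1] / x[-1])   # both print [ 0.25  0.5   0.25]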
I guess the real problem with this (and almost any other) coordinate
transformation is that it can yield an ill-conditioned problem (or
worsen the conditioning of a problem that is already ill-conditioned),
for example when the coordinate x[-1] is close to zero, either at the
optimal point or while moving along the search trajectory. This can
make optimization solvers behave unstably and unpredictably, and they
may stop far from the optimum.
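A made-up illustration (the numbers are mine, not from any real
problem): as x[-1] approaches zero, the transformed coordinates
diverge, so an optimum on that boundary sits at infinity in y-space
and the scaling gets steadily worse as the solver approaches it:

  import numpy as np
  for last in (1e-2, 1e-8, 1e-16):
      x = np.array([0.5, 0.5 - last, last])
      print(last, np.log(x[:-1] / x[-1]))
  # 1e-02 -> [ 3.91   3.89 ]
  # 1e-08 -> [ 17.73  17.73]
  # 1e-16 -> [ 36.15  36.15]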
Regards, D.


