Say, who's responsible for the anneal portion of optimize? I'd like to check in a minor tweak which implements simple upper and lower bounds on the fit parameters.
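A minimal sketch of the kind of clamping such a tweak could perform, assuming the bounds arrive as arrays of the same shape as the parameter vector; the helper name and signature here are only illustrative, not the actual patch:

import numpy as np

def clip_to_bounds(params, lower, upper):
    """Project a candidate parameter vector onto the box [lower, upper]."""
    # hypothetical helper: keep each fit parameter within its bounds
    return np.clip(params, lower, upper)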

Thanks,
William

On 4/18/07, Matthieu Brucher <matthieu.brucher@gmail.com> wrote:
Hi,

I'm launching a new thread; the last one was getting pretty big, and since I have folded almost every piece of advice into this proposal, I thought a fresh start would be better.
First, I used the scipy coding standard; I hope I didn't forget anything.

I do not yet know where it should go in the scipy tree, and the tests are visual for the moment. I have to make them automatic, but I do not know the test framework used by scipy, so I have to look into it first.

So, the proposal:
- combining several objects to make an optimizer
- a function should be an object defining the __call__ method, and gradient, hessian, ... if needed. It can also be passed as several separate functions, as Alan suggested; a new object is then created from them
- an optimizer is a combination of a function, a step_kind, a line_search, a criterion and a starting point x0.
- the result of the optimization is returned after a call to the optimize() method
- every object (step or line_search) saves its modifications in a state variable held by the optimizer. This variable can be accessed if needed after the optimization.
- after each iteration, a record function is called with this state variable (it is a dict, by the way); if you want to save the whole dict, don't forget to copy it, as it is modified in place during the optimization (see the sketch just below)
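To make the proposed wiring concrete, here is a minimal sketch of what such an optimizer might look like; every class name, signature and state-dict key below is an assumption drawn from the description above, not the actual proposed code:

class StandardOptimizer(object):
    """Combines a function, a step_kind, a line_search, a criterion and x0."""
    def __init__(self, function, step_kind, line_search, criterion, x0,
                 record=lambda state: None):
        self.function = function
        self.step_kind = step_kind
        self.line_search = line_search
        self.criterion = criterion
        self.record = record
        # shared state dict that the components read and update in place
        self.state = {'new_parameters': x0,
                      'new_value': function(x0),
                      'old_value': float('inf'),
                      'iteration': 0}

    def optimize(self):
        while not self.criterion(self.state):
            direction = self.step_kind(self.function, self.state)
            self.state['old_value'] = self.state['new_value']
            self.state['new_parameters'] = self.line_search(self.function,
                                                            self.state,
                                                            direction)
            self.state['new_value'] = self.function(self.state['new_parameters'])
            self.state['iteration'] += 1
            # the dict is reused in place, so a recorder must copy it
            self.record(self.state)
        return self.state['new_parameters']

A recorder that wants the full history would therefore do something like record = lambda state: history.append(dict(state)).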

For the moment, the following are implemented:
- a standard algorithm that simply calls step_kind then line_search to get a new candidate (the next optimizer would be one that calls a modifying function on the computed result, which can be useful in some cases)
- criteria:
 - monotony criterion: the cost must keep decreasing (a factor can be used to allow some error)
 - relative value criterion: iterate while the relative change in value is still higher than a fixed error
 - absolute value criterion: the same, but with the absolute error
- steps:
 - gradient step
 - Newton step
 - Fletcher-Reeves conjugate gradient step (other conjugate gradient variants will be available)
- line searches:
 - no line search, just take the step
 - damped search: an inexact line search that looks along the step direction for a set of parameters that decreases the cost, halving the step size as long as the cost does not decrease (see the sketch after this list)
 - Golden section search
 - Fibonacci search
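As an illustration of two of these components, here are sketches of the damped line search and the relative value criterion, written against the same assumed state dict as the sketch above; names and numerical defaults are guesses, not the proposed implementation:

import numpy as np

class DampedLineSearch(object):
    """Inexact search: halve the step until the cost actually decreases."""
    def __init__(self, min_step=1.e-10):
        self.min_step = min_step

    def __call__(self, function, state, direction):
        x = state['new_parameters']
        current = function(x)
        step = direction
        # divide the step size by two as long as the cost does not decrease
        while function(x + step) >= current and np.abs(step).max() > self.min_step:
            step = step / 2.
        return x + step

class RelativeValueCriterion(object):
    """Stop when the relative change in cost falls below a fixed error."""
    def __init__(self, error=1.e-6):
        self.error = error

    def __call__(self, state):
        if state['iteration'] == 0:
            return False              # nothing to compare against yet
        old, new = state['old_value'], state['new_value']
        return abs(old - new) <= self.error * abs(old)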

I'm not pulling in other criteria, steps or line searches for now, as my time is finite while making a structural change like this.

There are 3 classic optimization test functions in the package (Rosenbrock, Powell and a quadratic function); feel free to try them. Sometimes the optimizer converges to the true minimum, sometimes it does not; I tried to propose several combinations to show that not every combination manages to find the minimum.
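Under the same assumptions, a hypothetical end-to-end run on the Rosenbrock function, reusing the sketch classes defined above, could look like this (and, as said, such a combination may or may not end up at the true minimum (1, 1)):

import numpy as np

class Rosenbrock(object):
    """Cost function as an object exposing __call__ and gradient."""
    def __call__(self, x):
        return (1. - x[0]) ** 2 + 100. * (x[1] - x[0] ** 2) ** 2

    def gradient(self, x):
        return np.array([-2. * (1. - x[0]) - 400. * x[0] * (x[1] - x[0] ** 2),
                         200. * (x[1] - x[0] ** 2)])

class GradientStep(object):
    """Step in the direction of steepest descent."""
    def __call__(self, function, state):
        return -function.gradient(state['new_parameters'])

optimizer = StandardOptimizer(function=Rosenbrock(),
                              step_kind=GradientStep(),
                              line_search=DampedLineSearch(),
                              criterion=RelativeValueCriterion(error=1.e-6),
                              x0=np.array([-1.2, 1.]))
print(optimizer.optimize())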

Matthieu

_______________________________________________
Scipy-dev mailing list
Scipy-dev@scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-dev