[SciPy-user] number of function evaluation for leastsq
Achim Gaedke
Achim.Gaedke at physik.tu-darmstadt.de
Wed Apr 16 09:32:32 EDT 2008
Anne Archibald wrote:
> On 15/04/2008, Achim Gaedke <Achim.Gaedke at physik.tu-darmstadt.de> wrote:
>
>> While observing the approximation process I found that the first 3 runs
>> were always made with the same parameters. At first I thought the
>> parameter variation for the gradient approximation was too tiny to show
>> up in a simple print statement. Later I found out that these three runs
>> were independent of the number of fit parameters.
>>
>> A closer look at the code reveals the reason (svn dir trunk/scipy/optimize):
>>
>> The 1st call checks with Python code whether the function is valid:
>>
>> line 265 of minpack.py
>> m = check_func(func,x0,args,n)[0]
>>
>> The 2nd call is used to get the right amount of memory for the parameters:
>>
>> line 449 of __minpack.h
>> ap_fvec = (PyArrayObject *)call_python_function(fcn, n, x, extra_args,
>> 1, minpack_error);
>>
>> The 3rd call comes from inside the Fortran algorithm (the essential one!)
>>
>> Unfortunately that behaviour is not documented, and I would urge that
>> the superfluous calls to the function be avoided.
>>
>
> This is annoying, and it should be fixed inside scipy if possible; the
> FORTRAN code will make this more difficult, but file a bug on the
> scipy Trac and we'll look at it. In the meantime, you can use
> "memoizing" to avoid recomputing your function. See
> http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/498110
> for a bells-and-whistles implementation, but the basic idea is just
> that you wrap your function in a wrapper that stores a dictionary
> mapping inputs to outputs. Then every time you call the function it
> checks whether the function has been called before with these values,
> and if so, returns the value computed before.
>
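A minimal sketch of the memoizing wrapper suggested above (the residual
function, the data and all names here are only placeholders, not code from
this thread):

import numpy as np
from scipy.optimize import leastsq

def memoize_residuals(func):
    # cache residual evaluations, keyed on the exact parameter values
    cache = {}
    def wrapper(p, *args):
        key = tuple(np.asarray(p, dtype=float).ravel())
        if key not in cache:
            cache[key] = func(p, *args)
        return cache[key]
    return wrapper

def residuals(p, x, y):          # placeholder model: y = a*x + b
    a, b = p
    return y - (a * x + b)

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0
p_opt, ier = leastsq(memoize_residuals(residuals), [1.0, 0.0], args=(x, y))

Note that this only absorbs the duplicate calls made with identical
parameters (the two setup calls plus the first Fortran call); the
finite-difference steps use perturbed parameters and still have to be
evaluated.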
I am testing whether different physics models agree with the measurement.
So I cannot optimize each model in detail before testing it, and I cannot
even simplify the models, because I want to investigate the contributions
of the different effects.
I am not really troubled by 1.5 h for each data point, but I do have the
opportunity to compare the logs of each run.
I've already written a stub to avoid the unnecessary evaluations.
So my intention was to notify people that the behaviour is not what one
would expect. There are parameters that limit the number of function
evaluations (e.g. maxfev), and they might be inaccurate, because they
count only the Fortran calls.
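As a hypothetical illustration of such a stub (not the one actually used),
a small counting wrapper makes the extra evaluations visible; the model and
data below are again placeholders:

import numpy as np
from scipy.optimize import leastsq

class CountingResiduals:
    # wrap a residual function and record every parameter vector it sees
    def __init__(self, func):
        self.func = func
        self.calls = []
    def __call__(self, p, *args):
        self.calls.append(np.array(p, copy=True))
        return self.func(p, *args)

def residuals(p, x, y):          # placeholder model: y = a*exp(b*x)
    a, b = p
    return y - a * np.exp(b * x)

x = np.linspace(0.0, 1.0, 30)
y = 1.5 * np.exp(0.5 * x)

counted = CountingResiduals(residuals)
leastsq(counted, [1.0, 0.0], args=(x, y))

# the first three recorded parameter vectors are identical (check_func,
# the memory-allocation call in __minpack.h, and the first Fortran call)
for i, p in enumerate(counted.calls[:4]):
    print(i, p)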
Thanks for all the answers, Achim