[SciPy-User] lmfit: confidence intervals issue
Antonino Ingargiola
tritemio at gmail.com
Sat Jun 21 20:46:32 EDT 2014
Hi Matt,
On Sat, Jun 21, 2014 at 5:39 AM, Matt Newville <newville at cars.uchicago.edu>
wrote:
> Hi Antonio,
>
>
> On Fri, Jun 20, 2014 at 6:36 PM, Antonino Ingargiola <tritemio at gmail.com>
> wrote:
>
>> Hi Matt,
>>
>>
>> On Fri, Jun 20, 2014 at 2:12 PM, Matt Newville <
>> newville at cars.uchicago.edu> wrote:
>> [cut]
>>
>>>> Thanks for the explanation, now I understand the problem much better.
>>>> I have a model function with a discontinuity at the origin (i.e. exp(-x -
>>>> x0) for x > x0, else 0). If I sample it with a step dx, I will always have
>>>> a problem when x0 changes by less than dx. Is there any known trick I can
>>>> use to avoid this problem?
>>>>
>>>
>>> I'm not sure that there is a robust way to have the range of data
>>> considered in the fit be a parameter. I might suggest (though I haven't
>>> looked at your situation in great detail) that you consider using an
>>> "offset" that shifts the origin for the model, then interpolating that
>>> onto the grid of the data. That way, you might be able to set a fit range
>>> in the data coordinates before the fit, and not have it change. The model
>>> can be shifted in "x" (assuming there is such a thing -- your data appears
>>> to have an obvious "x" axis), but the fit uses a fixed data range. Again,
>>> I'm not sure that would fix all problems, but it might help.
>>>
>>
>> Unless I misunderstood, I already do what you suggest. My function is
>> exp(-x + x0) (note that I wrote "-x0" before by mistake) and x0 is
>> "continuous", regardless of the x discretization. The problem is that the
>> function is 0 for x < x0 and therefore there is a discontinuity at x = x0.
>> When the function is evaluated on the same discrete x array, smoothly
>> changing x0 does not result in a smooth translation of the function.
>>
>
> Ah, sorry, I see that now. I think I must have missed the use of
> "offset" in "np.exp(-(x[pos_range] - offset)/tau)" earlier. Yes, I
> think that should work, unless I'm missing something else....
>
> Still, the main issue is getting NaNs in the covariance matrix for the
> "ampl" and "offset" parameters, which is especially strange since the
> resulting fit with best-fit values looks pretty reasonable. You might
> try temporarily simplifying the model (turning weights on/off, turning the
> convolution step on/off) and/or printing values in the residual or model
> function to see if you can figure out what conditions cause that to happen.
>
>
I suspect that the pseudo-periodic behaviour of the residuals as a function
of the offset, caused by the time-axis discretization, is what causes
problems here. BTW, how can I see the covariance matrix?
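To illustrate my suspicion, here is a minimal stand-alone sketch (toy
values and names, not my actual model or data). The model is 0 below x0
and exp(-(x - x0)/tau) above it, sampled on a fixed grid with step dx.
As x0 sweeps through one grid step the curve deforms smoothly, but each
time x0 crosses a grid point a sample abruptly enters or leaves the
support, so the sum of squared residuals jumps with pseudo-period dx:

import numpy as np

dx = 0.1
x = np.arange(0, 10, dx)

def model(x, x0, tau=1.0):
    # decaying exponential, zero below the discontinuity at x0
    y = np.exp(-(x - x0) / tau)
    y[x < x0] = 0.0
    return y

data = model(x, x0=2.03)  # the "observed" curve
for x0 in (1.95, 2.00, 2.05, 2.10):
    r = data - model(x, x0)
    # changes smoothly within a grid step, jumps across grid points
    print(x0, (r**2).sum())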
> A different (untested) guess: exponential decays are often surprisingly
> difficult for leastsq(). You might try a different algorithm (say,
> Nelder-Mead) as a first pass, then use those results as starting values for
> leastsq() (as that will estimate uncertainties). I've heard of people
> having good success with this approach when fitting noisy exponentially
> decaying data.
>
Oh well, that's a very good suggestion. In the few trials I did,
Nelder-Mead works; leastsq then no longer moves the solution, but it lets
me compute the CI. Neat.
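For the archives, this is roughly what I did. It's a sketch with a
stand-in single-exponential model (not the convolved model from this
thread), and it assumes an lmfit version where minimize() returns an
object exposing .params and .covar:

import numpy as np
import lmfit

np.random.seed(0)
x = np.linspace(0, 10, 200)
data = 3.0 * np.exp(-x / 1.7) + np.random.normal(scale=0.05, size=x.size)

params = lmfit.Parameters()
params.add('ampl', value=1.0, min=0)
params.add('tau', value=1.0, min=0)

def residual(pars, x, data):
    return data - pars['ampl'].value * np.exp(-x / pars['tau'].value)

# first pass: the simplex method copes with the rough residual landscape
out_nm = lmfit.minimize(residual, params, args=(x, data), method='nelder')
# second pass: leastsq refines the result and estimates uncertainties
out_ls = lmfit.minimize(residual, out_nm.params, args=(x, data),
                        method='leastsq')

# if I understand correctly, this attribute holds the covariance matrix
print(out_ls.covar)
lmfit.report_fit(out_ls.params)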
As I told you, the original error is gone after updating to current master.
However, I found some new combinations of data and initial parameters that
give errors when computing the CI. The errors are:
AttributeError: 'int' object has no attribute 'copy'
and
ValueError: f(a) and f(b) must have different signs
I opened an issue to track them:
https://github.com/lmfit/lmfit-py/issues/91
Best,
Antonio