[Tutor] constructing semi-arbitrary functions

André Walker-Loud walksloud at gmail.com
Wed Feb 19 20:46:42 CET 2014


Hi Oscar,

>> Is there a benefit to this method vs a standard linear least squares?
> 
> It's the same except that you're using an analytic solution rather
> than a black box solver.

OK.  Interesting.  I usually construct the analytic solution by differentiating the chi^2, which I am guessing sets up a similar matrix equation.  When I solve via the chi^2, I see clearly how to include the uncertainties on the data and use them to estimate the uncertainties on the coefficients.  For linear least squares, the parameter uncertainties are simply related to the second derivatives of the chi^2.  But it is not clear to me whether knowing the Vandermonde matrix automatically gives these parameter uncertainties as well.
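For concreteness, here is a small sketch of what I mean (the polynomial model and function names are just made up for illustration): differentiating the chi^2 with respect to the coefficients gives the usual normal equations, built from the Vandermonde matrix, and the inverse of the second-derivative matrix gives the parameter covariance.

```python
import numpy as np

# Illustrative sketch: weighted linear least squares done by hand.
# chi^2 = sum_i [ y_i - sum_k c_k x_i^k ]^2 / sigma_i^2
# Setting d(chi^2)/dc_k = 0 gives the normal equations
#   (V^T W V) c = V^T W y,
# with V the Vandermonde matrix and W = diag(1/sigma_i^2).
# The Hessian d^2(chi^2)/dc_j dc_k = 2 (V^T W V), so the parameter
# covariance is just inv(V^T W V).

def wlsq_poly(x, y, sigma, deg):
    V = np.vander(x, deg + 1, increasing=True)  # Vandermonde matrix
    W = np.diag(1.0 / sigma**2)                 # weights from data errors
    H = V.T @ W @ V                             # (1/2) * Hessian of chi^2
    c = np.linalg.solve(H, V.T @ W @ y)         # best-fit coefficients
    cov = np.linalg.inv(H)                      # parameter covariance
    return c, cov

# Noiseless check: y = 1 + 2x with unit errors.
x = np.linspace(0.0, 1.0, 10)
y = 1.0 + 2.0 * x
c, cov = wlsq_poly(x, y, np.ones_like(x), 1)
print(c)   # close to [1., 2.]
```

So at least in this linear case, the same weighted Vandermonde matrix that solves the fit also carries the parameter uncertainties.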

>> The most common problem I am solving is fitting a sum of real exponentials to noisy data, with the model function
>> 
>> C(t) = sum_n A_n e^{- E_n t}
>> 
>> the quantities of most interest are E_n, followed by A_n so I solve this with non-linear regression.
>> To stabilize the fit, I usually do a linear-least squares for the A_n first, solving as a function of the E_n, and then do a non-linear fit for the E_n.
> 
> That doesn't sound optimal to me. Maybe I've misunderstood but fitting
> over one variable and then over another is not in any way guaranteed
> to produce an optimal fit.

I was not precise enough.
The first solve, for the A_n, uses an analytic linear least squares algorithm, so it is guaranteed to be correct.  The solution depends on the E_n, but in a known way.

The point is that the numerical minimization of sums of exponentials is a hard problem.  So why ask the numerical minimizer to do the extra work of also minimizing over the coefficients?  That can make it very unstable.  By first applying the analytic linear least squares to the coefficients, the numerical minimization over the smaller set of E_n is more likely to converge.  Usually, in my cases, we have on the order of 2-10 times more A_n than E_n, since we are really fitting a matrix

C_{ij}(t) = sum_n A^n_{ij} exp(-E_n t)

the linear least squares gives

A^n_{ij} = analytically solvable matrix function (E_n)
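A toy version of this two-step strategy, for the scalar case C(t) = sum_n A_n exp(-E_n t), might look like the following (the function names and the use of Nelder-Mead are just for illustration, not my actual code): for each trial set of E_n, the A_n are solved analytically by linear least squares, so the numerical minimizer only searches over the E_n.

```python
import numpy as np
from scipy.optimize import minimize

def solve_amplitudes(E, t, C):
    """Analytic linear least squares for the A_n at fixed E_n."""
    M = np.exp(-np.outer(t, E))              # design matrix M[i, n] = exp(-E_n t_i)
    A, *_ = np.linalg.lstsq(M, C, rcond=None)
    return A, M

def chi2(E, t, C):
    """chi^2 with the amplitudes projected out (unit errors for simplicity)."""
    A, M = solve_amplitudes(E, t, C)
    r = C - M @ A
    return r @ r

# Fake noiseless data from two exponentials.
t = np.linspace(0.0, 5.0, 50)
E_true = np.array([0.5, 1.5])
A_true = np.array([1.0, 0.7])
C = np.exp(-np.outer(t, E_true)) @ A_true

# Nonlinear minimization over the E_n only.
res = minimize(chi2, x0=[0.4, 1.2], args=(t, C), method='Nelder-Mead')
E_fit = np.sort(res.x)
A_fit, _ = solve_amplitudes(E_fit, t, C)
```

The minimizer here sees a 2-parameter problem instead of a 4-parameter one, which is exactly why the fit is more stable.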

> Well it sounds like your approach so far is working for now but as I
> say the real fix is to improve or bypass the interface you're using.
> One limitation that you may at some point hit is that in Python you
> can't have an unbounded number of formal parameters for a function:
> 
> $ python3 tmp2.py
>  File "tmp2.py", line 1
>    def f(x0,  x1,  x2,  x3,  x4,  x5,  x6,  x7,  x8,  x9,  x10,  x11,
> x12,  x13,  x14,
>         ^
> SyntaxError: more than 255 arguments

In my work, it is unlikely I will ever need to minimize over that many parameters.
I'll worry about that when I run into it.
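For what it's worth, the usual way around that limit (a sketch, not tied to any particular fitting interface) is to take a single array of parameters instead of many named arguments:

```python
import numpy as np

def f(x, params):
    """Polynomial model: params can be an array of any length,
    so there is no limit on the number of fit parameters."""
    return sum(p * x**k for k, p in enumerate(params))

# 300 parameters, no SyntaxError:
val = f(2.0, np.arange(300, dtype=float))
```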

Thanks for the info.

Andre

