I'm using the Hessian to calculate the covariance matrix for parameter estimates in least squares, i.e. the equivalent of `pcov` in `curve_fit` (I don't want to do a fit, I just want the covariance around the current location).

On 29 March 2018 at 03:05, <josef.pktd@gmail.com> wrote:
On Mon, Mar 26, 2018 at 7:57 PM, Andrew Nelson <andyfaff@gmail.com> wrote:
> I would like to calculate the Jacobian for a least squares problem,
> followed by a Hessian estimation, then the covariance matrix from that
> Hessian.
>
> With my current approach, the covariance matrix is sometimes not
> positive semi-definite. I am using the covariance matrix to seed an MCMC
> sampling process by supplying it to `np.random.multivariate_normal` to
> get initial positions for the MC chain.

I never looked much at the details of MCMC.
But if your data or starting point doesn't provide good information about
the Hessian, then, I think, you could shrink the Hessian toward the prior
covariance matrix, or combine the two, e.g. via a weighted average.

Josef

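[Editor's note: a minimal sketch of the shrinkage idea Josef suggests; the names `shrink_toward_prior`, `hess_cov`, `prior_cov`, and the weight are placeholders, and how much to trust the local Hessian is problem-dependent:]

```
import numpy as np

def shrink_toward_prior(hess_cov, prior_cov, weight=0.5):
    """Weighted average of a Hessian-based covariance and a prior covariance.

    weight=1.0 keeps the Hessian estimate; weight=0.0 falls back on the prior.
    """
    cov = weight * np.asarray(hess_cov) + (1.0 - weight) * np.asarray(prior_cov)
    return 0.5 * (cov + cov.T)  # guard against round-off asymmetry
```

[Because the smallest eigenvalue of a sum is bounded below by the sum of the smallest eigenvalues, putting enough weight on a positive-definite prior also repairs a covariance estimate that is not positive semi-definite.]
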
> I am using the following code:
>
> ```
> import numpy as np
> from scipy.optimize._numdiff import approx_derivative
>
> jac = approx_derivative(residuals_func, x0)  # Jacobian of the residuals at x0
> hess = np.matmul(jac.T, jac)                 # Gauss-Newton approximation, J^T J
> covar = np.linalg.inv(hess)
> ```
>
> Note that x0 may not be at a minimum.
>
> - would this be the usual way of estimating the Hessian? Is there
>   anything incorrect with the approach?
> - what is the recommended (i.e. numerically stable) way of inverting the
>   Hessian in such a situation?
> - does `optimize.leastsq` do anything different?
> - if `x0` is not at a minimum, should the covariance matrix be expected
>   to be positive semi-definite anyway?
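
[Editor's note: a minimal sketch of one numerically stabler route, under the thread's setup; the `residuals_func`/`x0` below are hypothetical stand-ins. `J^T J` itself is positive semi-definite by construction, even when `x0` is not at a minimum; it is the explicit `inv` that misbehaves when `J` is ill-conditioned. Building the covariance from an SVD of the Jacobian and discarding near-zero singular values (similar in spirit to how `curve_fit` assembles `pcov`, if I remember right) keeps the result symmetric and PSD:]

```
import numpy as np
from scipy.optimize._numdiff import approx_derivative

def covariance_estimate(residuals_func, x0):
    """Gauss-Newton covariance at x0, built from an SVD of the Jacobian."""
    jac = approx_derivative(residuals_func, x0)
    _, s, VT = np.linalg.svd(jac, full_matrices=False)
    keep = s > np.finfo(float).eps * max(jac.shape) * s[0]  # rank cutoff
    s, VT = s[keep], VT[keep]
    covar = (VT.T / s**2) @ VT      # pinv(J^T J), PSD by construction
    return 0.5 * (covar + covar.T)  # re-symmetrise against round-off

# Hypothetical usage: a toy linear model with two parameters.
xdata = np.linspace(0.0, 1.0, 20)
ydata = 2.0 * xdata + 1.0
residuals_func = lambda p: p[0] * xdata + p[1] - ydata

x0 = np.array([1.8, 1.2])  # deliberately not at the minimum
covar = covariance_estimate(residuals_func, x0)
starts = np.random.default_rng(0).multivariate_normal(
    x0, covar, size=50, check_valid="raise")  # 50 MCMC starting positions
```

[As for `optimize.leastsq`: as far as I know it derives its `cov_x` from the factorization of the Jacobian that MINPACK returns at the solution, so it is essentially the same `inv(J^T J)` quantity obtained by a slightly different route.]
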
--
_____________________________________
Dr. Andrew Nelson
_____________________________________