leastsq - When to scale covariance matrix by reduced chi square for confidence interval estimation
Hi List,

I'm trying to get my head around when to scale the covariance matrix by the reduced chi square of the problem to estimate the error of a parameter obtained via fitting. I'm kinda stuck and would appreciate any pointers to an answer. From the documentation of scipy.optimize.leastsq and scipy.optimize.curve_fit, as well as from some old threads on this mailing list [1, 2], it seems the procedure in scipy is the following:

1) Create an estimate of the Hessian matrix based on the Jacobian at the final value. This is called the covariance matrix cov_x and describes the curvature of the fit around the minimum.
2) Calculate the reduced chi square, red_chi_2 = sum( (w_i * (y_i - f_i))**2 ) / dof, where w_i are the weights, y_i the data, f_i the fit, and dof the number of degrees of freedom (number of data points minus number of fitted parameters).
3) Get the parameter error estimates as sqrt(diag(cov_x) * red_chi_2).
4) Scale the confidence interval by the appropriate value of the Student's t distribution, e.g. when quoting the standard deviation of a single parameter just multiply by 1.

So far, so good. However, in the literature [3, 4] I often find that steps 2 and 3 are skipped when the data are weighted by the errors of the individual observations. Obviously, for a good fit with red_chi_2 = 1 both ways of getting an error are the same. [3] and [4] caution that their method assumes, among other things, normality and a reduced chi square of about 1, and they discourage estimating the error from the fit for bad fits. However, the method currently in scipy somehow seems more robust. Take for example data similar to the one I am currently working with [5]. The fit has a reduced chi square of about one, and hence the errors of the scipy method and the literature method agree. If I make the reduced chi square worse by scaling the error bars, the literature method gives either very, very small errors or very, very large ones. The scipy method, however, always produces about the same error estimate.
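The four-step procedure above can be sketched with scipy.optimize.leastsq. The data here are made up (a toy constant-offset model, not the actual data from the gist in [5]):

```python
import numpy as np
from scipy.optimize import leastsq

# Toy data: a constant-offset model with assumed per-point errors.
np.random.seed(0)
y_err = np.full(20, 0.5)                           # assumed measurement errors
y = -71.0 + np.random.normal(0.0, 0.5, y_err.size)

def residuals(p):
    # weighted residuals w_i * (y_i - f_i), with w_i = 1 / sigma_i
    return (y - p[0]) / y_err

# Step 1: leastsq returns cov_x, the inverse of the approximate Hessian
p_opt, cov_x, infodict, mesg, ier = leastsq(residuals, [0.0], full_output=True)

# Step 2: reduced chi square
dof = y.size - p_opt.size
red_chi_2 = np.sum(residuals(p_opt) ** 2) / dof

# Step 3: parameter error, scipy convention (scaled by red_chi_2) ...
err_scipy = np.sqrt(np.diag(cov_x) * red_chi_2)
# ... versus the "literature" convention (no scaling)
err_lit = np.sqrt(np.diag(cov_x))

# Step 4 would multiply by the appropriate Student's t value (1 for a
# one-sigma interval of a single parameter).
print(p_opt[0], err_scipy[0], err_lit[0])
```

For this linear one-parameter model, err_lit is just sigma / sqrt(N), and err_scipy differs from it by the factor sqrt(red_chi_2).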
Here is the output of [5]:

    Errors scaled by: 1
    Offset: -71.0
    Chi 2: 19.1752
    Red. Chi 2: 1.0092
    Error literature 2.8426
    Error scipy 2.8557

    Errors scaled by: 100.0
    Offset: -71.0077
    Chi 2: 0.0019
    Red. Chi 2: 0.0001
    Error literature 283.9153
    Error scipy 2.8563

    Errors scaled by: 0.01
    Offset: -69.0724
    Chi 2: 9797.5838
    Red. Chi 2: 515.6623
    Error literature 0.1235
    Error scipy 2.8036

Now, in the particular problem I am working on, we have a couple of fits like [5], and some of them have a slightly worse reduced chi square of, say, about 1.4 or 0.7. At this point the two methods start to deviate, and I am wondering which would be the correct way of quoting the errors estimated from the fit. Even a basic reference to a textbook that explains the method used in scipy would be very helpful.

Thanks a lot,

Markus

P.S. On a related side remark: What happened to the benchmark tests against NIST certified nonlinear regression data sets mentioned in [1]? Are they still in the code base? And are there similar benchmark data sets for nonlinear regression with observations that have error bars on them?

[1] http://thread.gmane.org/gmane.comp.python.scientific.user/19482
[2] http://thread.gmane.org/gmane.comp.python.scientific.user/31023/focus=31024
[3] Numerical Recipes in C, 2nd edition, chapter 15.6, especially pp. 695-698
[4] Data Reduction and Error Analysis for the Physical Sciences, 3rd edition, chapter 8.5, p. 157
[5] Gist of self-contained example at https://gist.github.com/2848439
On 1 Jun 2012, at 05:50, Markus Baden wrote:
> Hi List,
>
> I'm trying to get my head around when to scale the covariance matrix by the reduced chi square of the problem to estimate the error of a parameter obtained via fitting. I'm kinda stuck and would appreciate any pointers to an answer. From the documentation of scipy.optimize.leastsq and scipy.optimize.curve_fit, as well as from some old threads on this mailing list [1, 2], it seems the procedure in scipy is the following:
>
> 1) Create an estimate of the Hessian matrix based on the Jacobian at the final value. This is called the covariance matrix cov_x and describes the curvature of the fit around the minimum.
> 2) Calculate the reduced chi square, red_chi_2 = sum( (w_i * (y_i - f_i))**2 ) / dof, where w_i are the weights, y_i the data, f_i the fit, and dof the number of degrees of freedom.
> 3) Get the parameter error estimates as sqrt(diag(cov_x) * red_chi_2).
> 4) Scale the confidence interval by the appropriate value of the Student's t distribution, e.g. when quoting the standard deviation of a single parameter just multiply by 1.
>
> So far, so good. However, in the literature [3, 4] I often find that steps 2 and 3 are skipped when the data are weighted by the errors of the individual observations. Obviously, for a good fit with red_chi_2 = 1 both ways of getting an error are the same. [3] and [4] caution that their method assumes, among other things, normality and a reduced chi square of about 1, and they discourage estimating the error from the fit for bad fits. However, the method currently in scipy somehow seems more robust. Take for example data similar to the one I am currently working with [5]. The fit has a reduced chi square of about one, and hence the errors of the scipy method and the literature method agree. If I make the reduced chi square worse by scaling the error bars, the literature method gives either very, very small errors or very, very large ones. The scipy method, however, always produces about the same error estimate. Here is the output of [5].
If you have knowledge about the statistical errors of your data, then skipping steps 2 and 3 is recommended, and you can use the chi square to assess the validity of the fit and of your assumptions about the errors. On the other hand, if you have insufficient knowledge about the errors, you can use the reduced chi square as an estimate of the variance of your data (at least under the assumption that the error is the same for all data points). This is the idea behind steps 2 and 3.
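The second case (no trustworthy per-point errors) can be sketched as follows; the data and model here are invented for illustration:

```python
import numpy as np
from scipy.optimize import leastsq

# Sketch: per-point errors unknown.  Fit with unit weights and use the
# reduced chi square as an estimate of the common data variance, which is
# exactly what steps 2 and 3 do.
np.random.seed(1)
x = np.linspace(0.0, 10.0, 30)
true_sigma = 0.3                                   # unknown to the fitter
y = 2.0 * x + 1.0 + np.random.normal(0.0, true_sigma, x.size)

def residuals(p):
    return y - (p[0] * x + p[1])                   # unweighted residuals

p_opt, cov_x, infodict, mesg, ier = leastsq(residuals, [1.0, 0.0],
                                            full_output=True)

dof = y.size - p_opt.size
red_chi_2 = np.sum(residuals(p_opt) ** 2) / dof    # estimates the variance
param_err = np.sqrt(np.diag(cov_x) * red_chi_2)    # steps 2 and 3

print(np.sqrt(red_chi_2))   # recovered noise level, close to true_sigma
```

Here sqrt(red_chi_2) plays the role of the unknown measurement error, so the scaling in step 3 recovers sensible parameter errors even though no error bars were supplied.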
> Now, in the particular problem I am working on, we have a couple of fits like [5], and some of them have a slightly worse reduced chi square of, say, about 1.4 or 0.7. At this point the two methods start to deviate, and I am wondering which would be the correct way of quoting the errors estimated from the fit. Even a basic reference to a textbook that explains the method used in scipy would be very helpful.
I didn't look at your data, but I guess that these values of the reduced chi square are still in a range where they do not deviate significantly from the expected value of one. The chi-squared distribution is rather broad. So I would omit steps 2 and 3, and only apply them if you have good reasons not to trust your assumptions about the errors of the data.

Gregor
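The breadth of the chi-squared distribution can be checked numerically: where does chi^2 = red_chi_2 * dof fall in the distribution? The dof = 19 below matches the example in [5]; the specific values are just the ones quoted above.

```python
from scipy.stats import chi2

# How far from 1 may a reduced chi square stray before it signals a bad
# fit?  Check the tail probability of chi^2 = red_chi_2 * dof at dof = 19.
dof = 19
for red_chi_2 in (0.7, 1.0, 1.4):
    p_value = chi2.sf(red_chi_2 * dof, dof)   # P(X >= observed chi^2)
    print(red_chi_2, round(p_value, 3))
```

None of these tail probabilities are extreme, so at this many degrees of freedom a reduced chi square of 0.7 or 1.4 is not, by itself, evidence of a bad fit.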
Hi Gregor,

Thanks for the fast reply.

> If you have knowledge about the statistical errors of your data, then skipping steps 2 and 3 is recommended, and you can use the chi square to assess the validity of the fit and of your assumptions about the errors. On the other hand, if you have insufficient knowledge about the errors, you can use the reduced chi square as an estimate of the variance of your data (at least under the assumption that the error is the same for all data points). This is the idea behind steps 2 and 3.
I just want to get that straight: so basically, in the case where I either don't have errors or don't trust them, multiplying the covariance by the reduced chi square amounts to "normalizing" the covariance such that the fit would have a reduced chi square of one(?). Maybe your point could go into the docs for curve_fit... or there could be a comment about the standard procedure, a bit like in Origin (http://www.originlab.com/www/support/resultstech.aspx?ID=452&language=English&Version=All).
> Now, in the particular problem I am working on, we have a couple of fits like [5], and some of them have a slightly worse reduced chi square of, say, about 1.4 or 0.7. At this point the two methods start to deviate, and I am wondering which would be the correct way of quoting the errors estimated from the fit. Even a basic reference to a textbook that explains the method used in scipy would be very helpful.
> I didn't look at your data, but I guess that these values of the reduced chi square are still in a range where they do not deviate significantly from the expected value of one. The chi-squared distribution is rather broad. So I would omit steps 2 and 3, and only apply them if you have good reasons not to trust your assumptions about the errors of the data.
We looked at where in the CDF these values fall, and they are still OK. And our errors are all inferred from measurements, so we trust them quite a bit. We use the fitting described above to obtain a particular property of an ion via spectroscopy... that's also why we want to get our errors on that property correct :)

In any case, thanks again,

Markus
On 1 Jun 2012, at 11:21, Markus Baden wrote:
> Hi Gregor,
>
> Thanks for the fast reply.
>
>> If you have knowledge about the statistical errors of your data, then skipping steps 2 and 3 is recommended, and you can use the chi square to assess the validity of the fit and of your assumptions about the errors. On the other hand, if you have insufficient knowledge about the errors, you can use the reduced chi square as an estimate of the variance of your data (at least under the assumption that the error is the same for all data points). This is the idea behind steps 2 and 3.
>
> I just want to get that straight: so basically, in the case where I either don't have errors or don't trust them, multiplying the covariance by the reduced chi square amounts to "normalizing" the covariance such that the fit would have a reduced chi square of one(?). Maybe your point could go into the docs for curve_fit... or there could be a comment about the standard procedure, a bit like in Origin (http://www.originlab.com/www/support/resultstech.aspx?ID=452&language=English&Version=All).
Yes, I think you correctly got the idea.
>> Now, in the particular problem I am working on, we have a couple of fits like [5], and some of them have a slightly worse reduced chi square of, say, about 1.4 or 0.7. At this point the two methods start to deviate, and I am wondering which would be the correct way of quoting the errors estimated from the fit. Even a basic reference to a textbook that explains the method used in scipy would be very helpful.
>
>> I didn't look at your data, but I guess that these values of the reduced chi square are still in a range where they do not deviate significantly from the expected value of one. The chi-squared distribution is rather broad. So I would omit steps 2 and 3, and only apply them if you have good reasons not to trust your assumptions about the errors of the data.
>
> We looked at where in the CDF these values fall, and they are still OK. And our errors are all inferred from measurements, so we trust them quite a bit. We use the fitting described above to obtain a particular property of an ion via spectroscopy... that's also why we want to get our errors on that property correct :)
As described nicely e.g. in http://mljohnson.pharm.virginia.edu/pdfs/174.pdf, you have to be careful about the parameter error estimates obtained by this procedure. In general, the results obtained are too optimistic.

Gregor
On Fri, Jun 1, 2012 at 8:21 AM, Gregor Thalhammer <gregor.thalhammer@gmail.com> wrote:
> On 1 Jun 2012, at 11:21, Markus Baden wrote:
>
>> Hi Gregor,
>>
>> Thanks for the fast reply.
>>
>>> If you have knowledge about the statistical errors of your data, then skipping steps 2 and 3 is recommended, and you can use the chi square to assess the validity of the fit and of your assumptions about the errors. On the other hand, if you have insufficient knowledge about the errors, you can use the reduced chi square as an estimate of the variance of your data (at least under the assumption that the error is the same for all data points).
When you use the weighting and the estimated sigma, you still leave the relative weighting unchanged. So we don't really assume that the error is the same for all data points.
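A quick numerical check of this point (toy data and a constant-offset model invented here, not the actual code from the gist in [5]): scaling all error bars by a common factor changes the reduced chi square, but leaves the scaled error estimate of step 3 unchanged, because the relative weights do not change.

```python
import numpy as np
from scipy.optimize import leastsq

# Toy check: a common scale factor on all error bars cancels out of
# sqrt(diag(cov_x) * red_chi_2), since only the relative weighting matters.
np.random.seed(2)
y = -71.0 + np.random.normal(0.0, 0.5, 20)
base_err = np.full(y.size, 0.5)

scaled_errs = []
for scale in (1.0, 100.0, 0.01):
    y_err = base_err * scale
    residuals = lambda p, e=y_err: (y - p[0]) / e
    p_opt, cov_x, infodict, mesg, ier = leastsq(residuals, [0.0],
                                                full_output=True)
    red_chi_2 = np.sum(residuals(p_opt) ** 2) / (y.size - 1)
    scaled_errs.append(np.sqrt(cov_x[0, 0] * red_chi_2))

print(scaled_errs)   # three nearly identical values
```

This reproduces the pattern in the output of [5]: the "scipy" error is stable under rescaling, while the unscaled "literature" error sqrt(diag(cov_x)) would track the scale factor directly.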
>>> This is the idea behind steps 2 and 3.
>
>> I just want to get that straight: so basically, in the case where I either don't have errors or don't trust them, multiplying the covariance by the reduced chi square amounts to "normalizing" the covariance such that the fit would have a reduced chi square of one(?). Maybe your point could go into the docs for curve_fit... or there could be a comment about the standard procedure, a bit like in Origin (http://www.originlab.com/www/support/resultstech.aspx?ID=452&language=English&Version=All).
>
> Yes, I think you correctly got the idea.
My interpretation is that you have to trust your error estimate more than the error estimate from the reduced chi square. In the two extreme scaled_x_errors cases, your belief about the errors is far from the model fit. So either the residual sum of squares is a bad measure of the error (outliers, ...), or what you think the measurement errors should be doesn't agree with the data. In the example the measurement error is translated from x errors to y errors by a linear approximation (at least according to the comments). If this approximation is not good, it introduces another type of error, which would create a discrepancy between your measurement errors and the reduced chi square.
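The linear approximation mentioned here can be sketched as follows; the model and numbers are illustrative, not taken from the gist in [5]:

```python
import numpy as np

# Sketch: translating an error in x into an effective error in y through
# the local slope of the model, sigma_y ~ |df/dx| * sigma_x.
def model(x, a, b):
    return a * x + b

def y_err_from_x_err(x, x_err, a, b, h=1e-6):
    # local slope df/dx estimated with a central difference
    slope = (model(x + h, a, b) - model(x - h, a, b)) / (2.0 * h)
    return np.abs(slope) * x_err

x = np.array([1.0, 2.0, 3.0])
print(y_err_from_x_err(x, 0.1, a=2.0, b=0.5))   # roughly [0.2 0.2 0.2]
```

For a nonlinear model this is only a first-order approximation, which is the point above: where the curvature matters, the translated y errors will disagree with the data and distort the reduced chi square.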
>>> Now, in the particular problem I am working on, we have a couple of fits like [5], and some of them have a slightly worse reduced chi square of, say, about 1.4 or 0.7. At this point the two methods start to deviate, and I am wondering which would be the correct way of quoting the errors estimated from the fit. Even a basic reference to a textbook that explains the method used in scipy would be very helpful.
>
>>> I didn't look at your data, but I guess that these values of the reduced chi square are still in a range where they do not deviate significantly from the expected value of one. The chi-squared distribution is rather broad. So I would omit steps 2 and 3, and only apply them if you have good reasons not to trust your assumptions about the errors of the data.
>
>> We looked at where in the CDF these values fall, and they are still OK. And our errors are all inferred from measurements, so we trust them quite a bit. We use the fitting described above to obtain a particular property of an ion via spectroscopy... that's also why we want to get our errors on that property correct :)
>
> As described nicely e.g. in http://mljohnson.pharm.virginia.edu/pdfs/174.pdf, you have to be careful about the parameter error estimates obtained by this procedure. In general, the results obtained are too optimistic.
Thanks, helpful example and discussion. I've never seen an estimated curve that fits so well.

Extra: with measurement error in x, the least squares estimator is usually biased. But I only know part of the theory for the linear case and don't have much idea how this will affect the estimates in the nonlinear case.

Josef
> Gregor
_______________________________________________
SciPy-User mailing list
SciPy-User@scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user
participants (3)

- Gregor Thalhammer
- josef.pktd@gmail.com
- Markus Baden