Dear all,

I wanted to briefly show you the results of an analysis I did of the performance of the optimize.leastsq method for data fitting. I presented these results at the last Python in Physics workshop. You can download the pdf here: http://emma.inrim.it:8080/gdurin/talks

1. The main concern is the use of cov_x to estimate the error bars of the fitting parameters. The docs state that "this matrix must be multiplied by the residual standard deviation to get the covariance of the parameter estimates -- see curve_fit." Unfortunately, this is only partially correct: it holds when no error bars are given for the input data (the sigma of curve_fit is None). But if error bars are provided, they are used "as weights in least-squares problem" (curve_fit doc), and cov_x then directly gives the covariance of the parameter estimates (i.e. the diagonal terms are the variances of the parameters). See for instance here: http://www.gnu.org/s/gsl/manual/html_node/Computing-the-covariance-matrix-of....

This means that not only the doc but also the curve_fit code needs fixing: as it stands, the estimate of the parameters' errors is INDEPENDENT of the values of the data errors when they are constant, which is clearly wrong (see the first sketch appended at the end of this mail). I have never submitted a patch before, but the fix should be quite simple; please give me some indication of how to proceed.

2. Convergence of the fit in the most difficult cases (see page 15 of my presentation) can require up to about 3000 iterations, reduced to about 800 when using analytical derivatives. For quite a long time I did not realize that the fit needed more iterations than the number set by maxfev, and so I started to think that leastsq was simply not good enough for 'hard' data. As a matter of fact, the default

    maxfev : int
        The maximum number of calls to the function. If zero, then
        100*(N+1) is the maximum where N is the number of elements in x0.

is pretty low, so I suggest increasing the prefactor from 100 to 1000. A relatively large number is not a problem, by the way, because if the system is sloppy, i.e. one parameter barely moves, the routine stops and reports "Both actual and predicted relative reductions in the sum of squares are at most 0.000000 and the relative error between two consecutive iterates is at most 0.000000", as for BoxBOD on pg. 15 (the second sketch below shows how to raise maxfev by hand in the meantime). We should also document that with an analytical derivative this number is halved, even if I personally would keep the same number for both cases.

This is all for the moment. Many thanks for your attention, and sorry for the long mail.

Gianfranco
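
P.S. To make point 1 concrete, here is a minimal sketch, calling leastsq directly (the straight-line model and all the numbers are invented just for illustration). Rescaling cov_x by the residual variance, as curve_fit does, gives parameter errors that do not change when all the error bars are scaled by a constant; taking cov_x directly does:

    import numpy as np
    from scipy.optimize import leastsq

    def residuals(p, x, y, sigma):
        a, b = p
        return (y - (a * x + b)) / sigma      # weighted residuals

    x = np.linspace(0, 10, 20)
    y = 2.0 * x + 1.0 + np.random.RandomState(0).normal(scale=0.5, size=x.size)

    for s in (0.5, 5.0):                      # two sets of constant error bars
        sigma = s * np.ones_like(x)
        p, cov_x, info, mesg, ier = leastsq(residuals, [1.0, 0.0],
                                            args=(x, y, sigma), full_output=True)
        # what curve_fit does now: rescale cov_x by the residual variance
        s_sq = (residuals(p, x, y, sigma)**2).sum() / (len(x) - len(p))
        print('rescaled:', np.sqrt(np.diag(cov_x * s_sq)))
        # what I argue is correct when sigma is given: use cov_x as it is
        print('direct:  ', np.sqrt(np.diag(cov_x)))

The 'rescaled' errors come out identical for both values of sigma, while the 'direct' ones scale with sigma, as they should.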
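
P.P.S. For point 2, this is the workaround I use in the meantime: pass maxfev explicitly and check ier. The model is BoxBOD, with data and starting values transcribed from the NIST StRD page (please double-check them against the original):

    import numpy as np
    from scipy.optimize import leastsq

    def residuals(p, x, y):
        b1, b2 = p
        return y - b1 * (1.0 - np.exp(-b2 * x))   # BoxBOD model

    x = np.array([1.0, 2.0, 3.0, 5.0, 7.0, 10.0])
    y = np.array([109.0, 149.0, 149.0, 280.0, 250.0, 560.0])

    p, cov_x, info, mesg, ier = leastsq(residuals, [1.0, 1.0],
                                        args=(x, y), full_output=True)
    print(ier, info['nfev'])   # ier == 5 means maxfev was exhausted

    # raise the limit well above the default until the prefactor is changed
    p, cov_x, info, mesg, ier = leastsq(residuals, [1.0, 1.0],
                                        args=(x, y), full_output=True,
                                        maxfev=10000)
    print(ier, info['nfev'], p)   # NIST certifies about b1=213.8, b2=0.547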