Hi everyone,

I have a question about the output of the scipy.optimize.curve_fit function. Consider the following example:

"""
In [1]: import numpy as np

In [2]: from scipy.optimize import curve_fit

In [3]: f = lambda x, a, b: a * x + b

In [4]: x = np.array([0., 1., 2.])

In [5]: y = np.array([1.2, 4.6, 7.8])

In [6]: e = np.array([1., 1., 1.])

In [7]: curve_fit(f, x, y, sigma=e)
Out[7]:
(array([ 3.3       ,  1.23333333]),
 array([[ 0.00333333, -0.00333333],
        [-0.00333333,  0.00555556]]))

In [8]: curve_fit(f, x, y, sigma=e * 100)
Out[8]:
(array([ 3.3       ,  1.23333333]),
 array([[ 0.00333333, -0.00333333],
        [-0.00333333,  0.00555556]]))
"""

The covariance matrix is identical even though the uncertainties on the data points were scaled by a factor of 100, so it clearly does not take the absolute size of those uncertainties into account. In other words, if I do:

"""
popt, pcov = curve_fit(...)
"""

then pcov[0,0]**0.5 is not the absolute uncertainty on the first parameter. How should pcov be scaled to give the actual uncertainties on the parameters?

Thanks!
Tom
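P.S. For concreteness, here is a minimal sketch of the rescaling I have in mind. I am assuming (possibly wrongly) that curve_fit multiplies the raw covariance from leastsq by the reduced chi-square of the fit, which would explain why scaling sigma by 100 above leaves pcov unchanged; if so, dividing that factor back out should recover the covariance for sigma interpreted as absolute errors:

"""
import numpy as np
from scipy.optimize import curve_fit

f = lambda x, a, b: a * x + b
x = np.array([0., 1., 2.])
y = np.array([1.2, 4.6, 7.8])
e = np.array([1., 1., 1.])

popt, pcov = curve_fit(f, x, y, sigma=e)

# Reduced chi-square of the fit, using the same sigma passed to curve_fit.
chi2_red = np.sum(((y - f(x, *popt)) / e) ** 2) / (len(x) - len(popt))

# If pcov was multiplied by chi2_red internally, dividing it back out
# should give the covariance with sigma treated as absolute errors.
pcov_abs = pcov / chi2_red
perr = np.sqrt(np.diag(pcov_abs))  # 1-sigma parameter uncertainties?
"""

Is that the right interpretation, or is there a cleaner way?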