On Sun, Jun 16, 2013 at 3:24 AM, Thomas Robitaille <thomas.robitaille@gmail.com> wrote:
Hi everyone,

I have a question regarding the output of the
scipy.optimize.curve_fit function. In the following example:

"""
    In [1]: import numpy as np

    In [2]: from scipy.optimize import curve_fit

    In [3]: f = lambda x, a, b: a * x + b

    In [4]: x = np.array([0., 1., 2.])

    In [5]: y = np.array([1.2, 4.6, 7.8])

    In [6]: e = np.array([1., 1., 1.])

    In [7]: curve_fit(f, x, y, sigma=e)
    Out[7]:
    (array([ 3.3       ,  1.23333333]),
     array([[ 0.00333333, -0.00333333],
           [-0.00333333,  0.00555556]]))

    In [8]: curve_fit(f, x, y, sigma=e * 100)
    Out[8]:
    (array([ 3.3       ,  1.23333333]),
     array([[ 0.00333333, -0.00333333],
           [-0.00333333,  0.00555556]]))
"""

it's clear that the covariance matrix does not take the absolute
scale of the uncertainties on the data points into account. If I do:

"""
popt, pcov = curve_fit(...)
"""

then pcov[0,0]**0.5 is not the uncertainty on the parameter, so I was
wondering how pcov should be scaled to give the actual parameter
uncertainties?

There was a long discussion about this, first by email and then on GitHub:

http://mail.scipy.org/pipermail/scipy-user/2011-August/030412.html
https://github.com/scipy/scipy/pull/448

The open pull request has the code to do the scaling you want.
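
In the meantime you can undo the rescaling by hand. As far as I can
tell, curve_fit multiplies the covariance returned by leastsq by the
reduced chi-square of the weighted residuals, which is exactly what
cancels any overall scale in sigma. Here is a minimal sketch of the
reverse operation, assuming your sigma values are absolute 1-sigma
errors (the chi2_red and pcov_abs names are just for illustration):

"""
import numpy as np
from scipy.optimize import curve_fit

f = lambda x, a, b: a * x + b
x = np.array([0., 1., 2.])
y = np.array([1.2, 4.6, 7.8])
e = np.array([1., 1., 1.]) * 100

popt, pcov = curve_fit(f, x, y, sigma=e)

# curve_fit scales the covariance by the reduced chi-square of the
# weighted residuals, so any constant factor in sigma cancels out.
# Dividing pcov by that reduced chi-square restores the absolute
# covariance.
chi2_red = np.sum(((y - f(x, *popt)) / e) ** 2) / (len(x) - len(popt))
pcov_abs = pcov / chi2_red

# 1-sigma parameter uncertainties that do scale with e:
perr = np.sqrt(np.diag(pcov_abs))
"""

If I read the pull request correctly, it adds an absolute_sigma
keyword to curve_fit that does this internally, so once it is merged
you would simply pass absolute_sigma=True.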

- Tom

Thanks!
Tom