efficiency of the simplex routine: R (optim) vs scipy.optimize.fmin
Dear scipy users,

Again a question about optimization. I've just compared the efficiency of the simplex routine in R (optim) vs scipy (fmin) when minimizing a chi-square. fmin is faster than optim, but appears to be less efficient. In R, the value of the function is minimized step by step (there are of course some exceptions), while there are a lot of fluctuations in python. Given that the underlying simplex algorithm is supposed to be the same, which mechanism is responsible for this difference? Is it possible to constrain fmin so it could be more rigorous?

Cheers,
Mathieu
Hi Mathieu,

(months later) Two differences among implementations of Nelder-Mead:

1) the start simplex: x0 +- what? It's common to take x0 + a fixed (user-specified) stepsize in each dimension. NLOpt takes a "walking simplex"; I don't know what R does.

2) termination: what ftol, xtol did you specify? NLOpt looks at fhi - flo: fhi changes at each iteration, flo is sticky.

Could you post a testcase similar to yours? That would sure be helpful.

cheers
-- denis
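[Editor's note on the termination point above: scipy's fmin defaults to xtol=1e-4 and ftol=1e-4, and both can be tightened explicitly. A minimal sketch on a toy quadratic, not on Mathieu's actual chi-square:]

```python
import numpy as np
from scipy.optimize import fmin

# Toy objective: a quadratic bowl with its minimum at (1, 2).
def f(p):
    return (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2

x0 = np.array([0.0, 0.0])

# Defaults are xtol=1e-4, ftol=1e-4; tightening them makes fmin keep
# shrinking the simplex longer before declaring convergence.
xopt = fmin(f, x0, xtol=1e-8, ftol=1e-8, disp=False)
print(xopt)
```

Whether this closes the gap with R's optim depends on the function: the tolerances only change when the simplex stops, not how it moves.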
Hi Denis,

Thanks for your response. For the fmin function in scipy, I took the default ftol and xtol values. I'm just trying to minimize a chi-square between observed experimental data and simulated data. I've done this in python and R with the Nelder-Mead algorithm, with exactly the same starting values. While the solutions produced by R and python are not very different, R systematically produces a lower chi-square after the same number of iterations. This may be related to ftol and xtol, but I don't know which values I should give these parameters...

Cheers,
Mat
_______________________________________________ SciPy-User mailing list SciPy-User@scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user
Hi,

Sorry for crossposting, but I'm trying to replace my "optim" function inside R, converting it to Python using scipy.optimize.fmin and leastsq. In Python, my results are approximately the same when the function is set up to run with the right parameters. I have read that fmin needs a single value as a result, so I have to np.sum the return value, while with leastsq I don't have to. But when compared with the R results of the same function (the "optim" function), the results are much different. Example below:

# Inside python
import numpy as np
from scipy.optimize import leastsq, fmin

def pp_min(params, x, y):
    alpha = params[0]
    Beta = params[1]
    gama = params[2]
    model = ((y - gama*(1 - np.exp(-alpha*x/gama))*np.exp(-Beta*x/gama))**2)
    return model

params = (0.4, 1.5, 80)
x = np.array([0, 1, 8, 16, 64, 164, 264, 364, 464, 564, 664, 764, 864, 964, 1064, 1164])
y = np.array([0., 0.2436, 1.6128, 2.8224, 6.72, 15.1536, 22.176, 30.576, 31.1808, 40.2696, 47.4096, 41.7144, 61.6896, 56.6832, 62.5632, 63.5544])

pp = leastsq(pp_min, params, args=(x, y))

In [2]: pp
Out[2]: (array([ 8.48990490e-02, -1.56537197e-02, 6.51505458e+01]), 1)

# inside R
x <- c(0, 1, 8, 16, 64, 164, 264, 364, 464, 564, 664, 764, 864, 964, 1064, 1164)
y <- c(0.0, 0.2436, 1.6128, 2.8224, 6.72, 15.1536, 22.176, 30.576, 31.1808, 40.2696, 47.4096, 41.7144, 61.6896, 56.6832, 62.5632, 63.5544)
dat <- as.data.frame(cbind(x, y))
names(dat) <- c("x", "y")
attach(dat)

pp_min <- function(params, data=dat) {
    alpha <- params[1]
    Beta <- params[2]
    gama <- params[3]
    return(sum((y - gama*(1 - exp(-alpha*x/gama))*exp(-Beta*x/gama))^2))
}

pp <- optim(par=c(0.4, 1.5, 80), fn=pp_min)

Out:
pp
$par
[1]   0.09157204   0.02129695 148.89173924

The third parameter in particular is much different. Any reasonable explanation?

Thank you.

---
Arnaldo D'Amaral Pereira Granja Russo
Lab. de Estudos dos Oceanos e Clima
Instituto de Oceanografia - FURG
On Thu, May 9, 2013 at 4:20 PM, Arnaldo Russo <arnaldorusso@gmail.com> wrote:
pp = leastsq(pp_min, params, args=(x, y))
For leastsq the function should not be squared; taking the square and summing is part of the algorithm. Drop the **2 in pp_min and it should work.

From the docstring:

    x = arg min(sum(func(y)**2, axis=0))
             y

Josef
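[Editor's note: Josef's point made concrete with the data from the post. leastsq gets the raw residual vector and squares/sums internally, while fmin needs the scalar sum of squares built explicitly; `residuals` and `chi2` below are illustrative names, not from the thread.]

```python
import numpy as np
from scipy.optimize import leastsq, fmin

x = np.array([0, 1, 8, 16, 64, 164, 264, 364, 464, 564,
              664, 764, 864, 964, 1064, 1164], dtype=float)
y = np.array([0., 0.2436, 1.6128, 2.8224, 6.72, 15.1536, 22.176,
              30.576, 31.1808, 40.2696, 47.4096, 41.7144, 61.6896,
              56.6832, 62.5632, 63.5544])

def residuals(params, x, y):
    # leastsq wants the raw residual vector; it squares and sums internally.
    alpha, beta, gama = params
    return y - gama * (1.0 - np.exp(-alpha * x / gama)) * np.exp(-beta * x / gama)

def chi2(params, x, y):
    # fmin wants a scalar, so square and sum explicitly.
    return np.sum(residuals(params, x, y) ** 2)

p0 = (0.4, 1.5, 80.0)
p_lsq, ier = leastsq(residuals, p0, args=(x, y))
p_nm = fmin(chi2, p0, args=(x, y), disp=False)
```

Both solvers see the same objective this way; they may still stop at different local minima, as discussed below in the thread.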
Hi Josef,

Thank you for the quick reply. Changing my pp_min, without the square, I have the results:

Out[4]: (array([ 9.82259647e-02, -1.58338152e-02, 5.03125198e+01]), 1)

It's not much different from the previous results. Any reason for this?

---
Arnaldo D'Amaral Pereira Granja Russo
Lab. de Estudos dos Oceanos e Clima
Instituto de Oceanografia - FURG
The chi^2 for the solution found by leastsq is smaller than for that found by R.

-- Pauli Virtanen
I.e., your function has several local minima, and the solver in R doesn't happen to pick the lowest one. This is a common occurrence with local minimization algorithms: different algorithms may find different local minima.

-- Pauli Virtanen
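[Editor's note: Pauli's point can be illustrated with a toy multimodal objective (constructed for illustration, not the chi-square from the thread). Restarting a local solver from several starting points and keeping the best result is a cheap guard against stopping at the wrong local minimum.]

```python
import numpy as np
from scipy.optimize import fmin

def f(p):
    # Toy 1-D objective with two local minima, near x = -0.97 and
    # x = +1.02; the tilt term 0.05*(x-2)^2 makes the one near +1
    # the global minimum.
    x = p[0]
    return (x ** 2 - 1.0) ** 2 + 0.05 * (x - 2.0) ** 2

# Nelder-Mead started in different basins converges to different minima,
# just as R's optim and scipy's fmin can on a multimodal chi-square.
starts = [-2.0, -0.5, 0.5, 2.0]
solutions = [fmin(f, [s], disp=False) for s in starts]
best = min(solutions, key=lambda p: f(p))
print(best)
```

Starts in the left basin converge near -0.97, starts in the right basin near +1.02; only keeping the best of all runs recovers the global minimum.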
Hi Pauli,

I didn't understand. I thought the output was the solution of the best values of "alpha", "beta" and "gama". But is a much lower value of gama a better choice? The R solver picked a value roughly double that of the python result. I'm asking these things because I want to plot a fit curve with these parameters and I don't know how.

Thank you

---
Arnaldo D'Amaral Pereira Granja Russo
Lab. de Estudos dos Oceanos e Clima
Instituto de Oceanografia - FURG
Hi Arnaldo,
The solver tries to find the parameter (alpha, beta, gama) values that minimize the sum of the squared values returned by your function, `pp_min()`. When I try the Python and R solutions:

sum(pp_min(array([ 9.82259647e-02, -1.58338152e-02, 5.03125198e+01]), x, y), axis=0)
Out[23]: 153.42871102569131

>>> pp_R = array([0.09157204, 0.02129695, 148.89173924])
sum(pp_min(pp_R, x, y), axis=0)
Out[21]: 155.07002552970221

So it seems the Python solver found a better solution than the R one. You might want to fix the alpha and beta parameters and draw plots with different values of gama to see what is going on.

Best,
Joon
Dear Arnaldo,

I think what Pauli is trying to say is that the algorithm does not guarantee that the *global* minimum will be found; it proceeds with conjugate gradients or the like to find a local minimum starting from the initial parameter values, so you might very well converge to a local minimum but miss the global one.

Maybe your function is ill-behaved; it seems to be "scale invariant": X=x/gamma, Y=y/gamma turns your function into g(X,Y) that depends on gamma only through an overall factor, if I read your formula correctly.

cheers,
Johann
Thank you so much for the explanations. Now I understand the minimum solution. My question is: if I put those minimized values as parameters of my pp_min function, will I get the values of the adjusted curve?

Arnaldo.

---
Arnaldo D'Amaral Pereira Granja Russo
Lab. de Estudos dos Oceanos e Clima
Instituto de Oceanografia - FURG
Hi,

Yes, as Joon wrote, you can use the two sets as parameters for your function.

Cheers,
--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
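[Editor's note on plotting the fit: the fitted curve is the model expression that pp_min subtracts from y, evaluated at the minimized parameters on a fine grid. A sketch using the leastsq parameters reported earlier in the thread; the `model` helper is an illustrative name, not from the thread.]

```python
import numpy as np

# Parameters from the leastsq fit reported in the thread.
alpha, beta, gama = 9.82259647e-02, -1.58338152e-02, 5.03125198e+01

def model(x, alpha, beta, gama):
    # The model curve itself; pp_min returns y minus this expression.
    return gama * (1.0 - np.exp(-alpha * x / gama)) * np.exp(-beta * x / gama)

# Evaluate on a fine grid covering the data range; the curve can then be
# drawn over the (x, y) scatter, e.g. with matplotlib: plt.plot(xs, ys).
xs = np.linspace(0.0, 1164.0, 200)
ys = model(xs, alpha, beta, gama)
```

The fitted y-values at the original x points are simply `model(x, alpha, beta, gama)`.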
participants (8)
- Arnaldo Russo
- denis
- Johann Cohen-Tanugi
- Joon Ro
- josef.pktd@gmail.com
- Matthieu Brucher
- Pauli Virtanen
- servant mathieu