polyfit with fixed points
A couple of days back, answering a question on StackExchange (http://stackoverflow.com/a/15196628/110026), I found myself using Lagrange multipliers to fit a polynomial to data with least squares, making sure it went through some fixed points. This time it was relatively easy, because some 5 years ago I came across the same problem in real life, and spent the better part of a week banging my head against it. Even knowing what you are doing, it is far from simple, and in my own experience very useful: I think the only time I have ever fitted a polynomial to data with a definite purpose, it required that some points were fixed.

Seeing that polyfit is entirely coded in Python, it would be relatively straightforward to add support for fixed points. It is also something I feel capable of, and willing to do.

* Is such an additional feature worth investigating, or will it never find its way into numpy.polyfit?
* Any ideas on the best syntax for the extra parameters?

Thanks,

Jaime

(\__/)
( O.o)
( > <) This is Rabbit. Copy Rabbit into your signature and help him with his plans for world domination.
Interesting, that question would probably have gotten a different response on scicomp; it is a pity we are not attracting more questions there!

I know there are two polyfit implementations in numpy, one in numpy.polyfit, the other in numpy.polynomial; the functionality you are suggesting seems to fit in either.

What parameters/functionality are you considering adding?

A

On Mon, Mar 4, 2013 at 7:23 PM, Jaime Fernández del Río <jaime.frio@gmail.com> wrote:
<snip>
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
On Mon, Mar 4, 2013 at 4:53 PM, Aron Ahmadia <aron@ahmadia.net> wrote:
<snip>
What parameters/functionality are you considering adding?
Well, you need two more array-likes, x_fixed and y_fixed, which could probably be fed to polyfit as an optional tuple parameter:

polyfit(x, y, deg, fixed_points=(x_fixed, y_fixed))

The standard return would still be the deg + 1 coefficients of the fitted polynomial, so the workings would be perfectly backwards compatible. An optional return, either when full=True, or by setting an additional lagrange_mult=True flag, could include the values of the Lagrange multipliers calculated during the fit.

Jaime
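For concreteness, the Lagrange multiplier fit being proposed can be sketched in a few lines. `constrained_polyfit` is a hypothetical name, not an existing numpy function, and this sketch solves the KKT system directly rather than going through lstsq:

```python
import numpy as np

def constrained_polyfit(x, y, deg, x_fixed, y_fixed):
    """Hypothetical fixed-point fit via Lagrange multipliers (a sketch)."""
    A = np.vander(x, deg + 1)        # design matrix for the data points
    B = np.vander(x_fixed, deg + 1)  # one constraint row per fixed point
    k = len(x_fixed)
    # KKT system for: minimize ||A c - y||^2  subject to  B c = y_fixed
    K = np.block([[2.0 * A.T @ A, B.T],
                  [B, np.zeros((k, k))]])
    rhs = np.concatenate([2.0 * A.T @ y, y_fixed])
    sol = np.linalg.solve(K, rhs)
    # coefficients (highest power first), plus the Lagrange multipliers
    return sol[:deg + 1], sol[deg + 1:]
```

The second return value is what the lagrange_mult=True idea above would expose.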
<snip>
On Mon, Mar 4, 2013 at 5:53 PM, Aron Ahmadia <aron@ahmadia.net> wrote:
<snip>
The discussion list convention is to bottom post.

<snip>

Chuck
On Mon, Mar 4, 2013 at 5:23 PM, Jaime Fernández del Río <jaime.frio@gmail.com> wrote:
<snip>
There are actually seven versions of polynomial fit, two for the usual polynomial basis, and one each for Legendre, Chebyshev, Hermite, Hermite_e, and Laguerre ;)

How do you propose to implement it? I think Lagrange multipliers is overkill; I'd rather see using the weights (approximate) or a change of variable (a permutation in this case) followed by qr and lstsq.

Chuck
On Mon, Mar 4, 2013 at 8:37 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
There are actually seven versions of polynomial fit, two for the usual polynomial basis, and one each for Legendre, Chebyshev, Hermite, Hermite_e, and Laguerre ;)
Correct me if I am wrong, but the fitted function is the same regardless of the polynomial basis used. I don't know if there can be numerical stability issues, but chebfit(x, y, n) returns the same as poly2cheb(polyfit(x, y, n)). In any case, with all the already existing support for these special polynomials, it wouldn't be too hard to set the problem up to calculate the right coefficients directly for each case.
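The claimed equivalence is easy to check with the numpy.polynomial package itself (note this uses the package's own polyfit, whose coefficient order is lowest degree first, rather than the old np.polyfit):

```python
import numpy as np
from numpy.polynomial import polynomial as P
from numpy.polynomial import chebyshev as C

x = np.linspace(-1, 1, 31)
y = np.cos(3 * x)

# Fit directly in the Chebyshev basis...
c_cheb = C.chebfit(x, y, 5)
# ...and fit in the power basis, then convert.
c_conv = C.poly2cheb(P.polyfit(x, y, 5))

# Same fitted function, so the coefficients agree to roundoff.
assert np.allclose(c_cheb, c_conv)
```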
How do you propose to implement it? I think Lagrange multipliers is overkill, I'd rather see using the weights (approximate) or a change of variable (a permutation in this case) followed by qr and lstsq.
The weights method is already in place, but I find it rather inelegant and unsatisfactory as a solution to this problem. But if it is deemed sufficient, then there is of course no need to go any further.

I hadn't thought of any way other than Lagrange multipliers, but looking at it in more detail, I am not sure it can be formulated in a manner that can be fed to lstsq, as polyfit does today. And if it can't, it probably wouldn't make much sense to have two different methods, which cannot produce the same full output, running under the same hood.

I can't figure out your "change of variable" method from the succinct description; could you elaborate a little more?

Jaime
On Tue, Mar 5, 2013 at 12:41 AM, Jaime Fernández del Río <jaime.frio@gmail.com> wrote:
<snip>
I can't figure out your "change of variable" method from the succinct description, could you elaborate a little more?
I think the place to add this is to lstsq as linear constraints. That is, the coefficients must satisfy B * c = y_c for some set of equations B. In the polynomial case the rows of B would be the powers of x at the points you want to constrain. Then do an svd on B, B = u * d * v. Apply v to the design matrix of the unconstrained points, A' = A * v^T, so that B' becomes u * d. The coefficients are now replaced by new variables c', with the constraints in the first two columns. If there are, say, 2 constraints, u * d will be 2x2. Solve that equation for the first two transformed coefficients, then multiply the first two columns of the design matrix A' by the result and put them on the rhs, i.e.,

y = y - A'[:, :2] * c'[:2]

then solve the usual least squares problem

A'[:, 2:] * c'[2:] = y

to get the rest of the transformed coefficients c'. Put the coefficients together and multiply with v^T to get

c = v^T * c'

Chuck
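Translated into code, the recipe above might look something like the sketch below. `constrained_lstsq_fit` is a made-up name, and note that np.linalg.svd already returns v in the B = u * d * v orientation (as vh):

```python
import numpy as np

def constrained_lstsq_fit(x, y, deg, x_fixed, y_fixed):
    """Sketch of the change-of-variable method: eliminate the
    constraints with an SVD of B, solve a reduced lstsq, map back."""
    A = np.vander(x, deg + 1)         # design matrix of unconstrained points
    B = np.vander(x_fixed, deg + 1)   # constraint rows, B @ c = y_fixed
    k = len(x_fixed)

    # B = u @ diag(d) @ vh; the substitution c = vh.T @ cp makes the
    # constraints act on the first k transformed coefficients only.
    u, d, vh = np.linalg.svd(B, full_matrices=True)
    cp = np.zeros(deg + 1)
    cp[:k] = np.linalg.solve(u @ np.diag(d), y_fixed)

    # Move the known part to the right-hand side...
    Ap = A @ vh.T
    rhs = y - Ap[:, :k] @ cp[:k]
    # ...and solve the remaining ordinary least-squares problem.
    cp[k:] = np.linalg.lstsq(Ap[:, k:], rhs, rcond=None)[0]

    return vh.T @ cp                  # back to ordinary coefficients
```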
On Tue, Mar 5, 2013 at 6:23 AM, Charles R Harris <charlesr.harris@gmail.com> wrote:
<snip>
There are a few missing `'` in there, but I think you can get the idea: we are making the substitution c = v^T * c'.

Chuck
On Tue, Mar 5, 2013 at 5:23 AM, Charles R Harris <charlesr.harris@gmail.com> wrote:
<snip>
Very nice, and works beautifully! I have tried the method you describe, and there are a few relevant observations:

1. It gives the exact same result as the Lagrange multiplier approach, which is probably expected, but I wasn't all that sure would be the case.

2. The result also seems to be what the sequence of fits giving increasing weights to the fixed points converges to. This image http://i1092.photobucket.com/albums/i412/jfrio/image.png is an example. In there:
* blue crosses are the data points to fit to
* red points are the fixed points
* blue line is the standard polyfit
* red line is the constrained polyfit
* cyan, magenta, yellow and black are polyfits with weights of 2, 4, 8, 16 for the fixed points, 1 for the rest

Seeing this last point, probably the cleanest, least disruptive implementation of this would be to allow np.inf values in the weights parameter, which would get filtered out and dealt with in the above manner.

So I have two questions:

1. Does this make sense? Or would it be better to make it more explicit, with a 'fixed_points' keyword argument defaulting to None?
2. Once I have this implemented, documented and tested... how do I go about submitting it for consideration? Would a patch be the way to go, or should I fork?

Thanks,

Jaime
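The convergence in point 2 can also be checked numerically, without plotting, using the existing weights parameter of np.polyfit; the data below is made up for illustration, with wf playing the role of the 2, 4, 8, 16 weights:

```python
import numpy as np

x = np.linspace(-1, 1, 21)
y = x**3 - x + 0.05 * np.cos(10 * x)   # deterministic stand-in for noisy data
x_f = np.array([-1.0, 1.0])            # fixed points
y_f = np.array([0.0, 0.0])

xs = np.concatenate([x, x_f])
ys = np.concatenate([y, y_f])
for wf in (1.0, 2.0, 4.0, 8.0, 16.0, 1e6):
    w = np.concatenate([np.ones_like(x), np.full(len(x_f), wf)])
    c = np.polyfit(xs, ys, 3, w=w)
    # the miss at the fixed points shrinks as the weight grows
    miss = np.max(np.abs(np.polyval(c, x_f) - y_f))
    print(wf, miss)
```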
On Wed, Mar 6, 2013 at 4:52 PM, Jaime Fernández del Río <jaime.frio@gmail.com> wrote:
<snip>
Very nice, and works beautifully! I have tried the method you describe, and there are a few relevant observations:
1. It gives the exact same result as the Lagrange multiplier approach, which is probably expected, but I wasn't all that sure it would be the case.
It's equivalent, but I'm thinking in algorithmic terms, which is somewhat more specific than the mathematical formulation.
2. The result also seems to be what the sequence of fits giving increasing weights to the fixed points converges to. This image http://i1092.photobucket.com/albums/i412/jfrio/image.png is an example. In there:
* blue crosses are the data points to fit to
* red points are the fixed points
* blue line is the standard polyfit
* red line is the constrained polyfit
* cyan, magenta, yellow and black are polyfits with weights of 2, 4, 8, 16 for the fixed points, 1 for the rest
Seeing this last point, probably the cleanest, least disruptive implementation of this, would be to allow np.inf values in the weights parameter, which would get filtered out, and dealt with in the above manner.
Interesting idea, I like it. It is less general, but probably all that is needed for polynomial fits. I suppose that after you pull out the relevant rows you can set the weights to zero so that they will have no (order of roundoff) effect on the remaining fit and you don't need to rewrite the design matrix.
So I have two questions:
1. Does this make sense? Or will it be better to make it more explicit, with a 'fixed_points' keyword argument defaulting to None?
2. Once I have this implemented, documented and tested... How do I go about submitting it for consideration? Would a patch be the way to go, or should I fork?
A fork is definitely the way to go. That makes it easy for folks to review the code and tell you everything you did wrong ;)

I think adding linear constraints to lstsq would be good, then the upper level routines can make use of them. Something like a new argument constraints=(B, y), with None the default.

Chuck
Hi,

On Thu, Mar 7, 2013 at 1:52 AM, Jaime Fernández del Río <jaime.frio@gmail.com> wrote:
<snip>
Seeing this last point, probably the cleanest, least disruptive implementation of this, would be to allow np.inf values in the weights parameter, which would get filtered out, and dealt with in the above manner.
Just to point out that a very simple approach is where one just multiplies the constraints with a big enough number M, like:

In []: def V(x, n= None):
   ....:     """Polynomial package compatible Vandermonde 'matrix'"""
   ....:     return vander(x, n)[:, ::-1]

In []: def clsq(A, b, C, d, M= 1e5):
   ....:     """A simple constrained least squares solution of Ax= b, s.t. Cx= d"""
   ....:     return solve(dot(A.T, A)+ M* dot(C.T, C), dot(A.T, b)+ M* dot(C.T, d))

In []: x= linspace(-6, 6, 23)
In []: y= sin(x)+ 4e-1* rand(len(x))- 2e-1
In []: x_f, y_f= linspace(-(3./ 2)* pi, (3./ 2)* pi, 4), array([1, -1, 1, -1])
In []: n, x_s= 5, linspace(-6, 6, 123)
In []: plot(x, y, 'bo', x_f, y_f, 'bs', x_s, sin(x_s), 'b')
Out[]: <snip>
In []: for M in 7** (arange(5)):
   ....:     p= Polynomial(clsq(V(x, n), y, V(x_f, n), y_f, M))
   ....:     plot(x_s, p(x_s))
   ....:
Out[]: <snip>
In []: ylim([-2, 2])
Out[]: <snip>
In []: show()

Obviously this is not any 'silver bullet' solution, but simple enough ;)

My 2 cents,
eat
<snip>
On Thu, Mar 7, 2013 at 9:22 AM, eat <e.antero.tammi@gmail.com> wrote:
<snip>
Just to point out that a very simple approach is where one just multiplies the constraints with a big enough number M, like:
In []: def V(x, n= None):
   ....:     """Polynomial package compatible Vandermonde 'matrix'"""
   ....:     return vander(x, n)[:, ::-1]
Just to note, there is a polyvander in numpy.polynomial.polynomial, and a chebvander in numpy.polynomial.chebyshev, etc.

<snip>

Chuck
Jaime,

If you are going to work on this, you should also take a look at the recent thread http://mail.scipy.org/pipermail/numpy-discussion/2013-February/065649.html, which is about the weighting function, which is in a confused state in the current version of polyfit.

By the way, Numerical Recipes has a nice discussion both about fixing parameters and about weighting the data in different ways in polynomial least squares fitting.

David

On Mon, Mar 4, 2013 at 7:23 PM, Jaime Fernández del Río <jaime.frio@gmail.com> wrote:
<snip>
participants (5)

- Aron Ahmadia
- Charles R Harris
- David Pine
- eat
- Jaime Fernández del Río