Unrealistic expectations of class Polynomial or a bug?

Hi,

Short demonstration of the issue:

In []: sys.version
Out[]: '2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)]'
In []: np.version.version
Out[]: '1.6.0'
In []: from numpy.polynomial import Polynomial as Poly
In []: def p_tst(c):
   ..:     p= Poly(c)
   ..:     r= p.roots()
   ..:     return sort(abs(p(r)))
   ..:

Now I would expect a result more like:

In []: p_tst(randn(123))[-3:]
Out[]: array([  3.41987203e-07,   2.82123675e-03,   2.82123675e-03])

to be the case, but actually most results seem to be more like:

In []: p_tst(randn(123))[-3:]
Out[]: array([  9.09325898e+13,   9.09325898e+13,   1.29387029e+72])
In []: p_tst(randn(123))[-3:]
Out[]: array([  8.60862087e-11,   8.60862087e-11,   6.58784520e+32])
In []: p_tst(randn(123))[-3:]
Out[]: array([  2.00545673e-09,   3.25537709e+32,   3.25537709e+32])
In []: p_tst(randn(123))[-3:]
Out[]: array([  3.22753481e-04,   1.87056454e+00,   1.87056454e+00])
In []: p_tst(randn(123))[-3:]
Out[]: array([  2.98556327e+08,   2.98556327e+08,   8.23588003e+12])

So, does this phenomenon imply that
- I'm testing with too high-order polynomials (if so, is there a definite upper limit on the polynomial order below which I won't face this issue), or
- it's just the 'nature' of computations with float values (if so, I should probably be able to tackle this regardless of the polynomial order), or
- it's a nasty bug in class Polynomial?

Regards,
eat
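[For reference, a self-contained version of the demonstration above, as a sketch only; it assumes a plain Python script with NumPy rather than the exact IPython 1.6.0 session shown.]

import numpy as np
from numpy.polynomial import Polynomial as Poly

def p_tst(c):
    # Build a polynomial from the given coefficients, compute its roots,
    # and return the sorted residuals |p(r)| at those roots.
    p = Poly(c)
    r = p.roots()
    return np.sort(np.abs(p(r)))

# Degree-122 polynomial with random coefficients; ideally the three
# largest residuals would all be tiny.
print(p_tst(np.random.randn(123))[-3:])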

On Sat, Jan 28, 2012 at 11:15 AM, eat <e.antero.tammi@gmail.com> wrote:
It's a defect. You will get all the roots, and their number will equal the degree. I haven't decided what the best way to deal with this is, but my thoughts have trended towards specifying an interval, with the default being the domain. If you have other thoughts I'd be glad for the feedback.

For the problem at hand, note first that you are specifying the coefficients, not the roots as was the case with poly1d. Second, as a rule of thumb, plain old polynomials will generally only be good for degree < 22 because they are numerically ill conditioned. If you are really looking to use high degrees, Chebyshev or Legendre will work better, although you will probably need to explicitly specify the domain. If you want to specify the polynomial using roots, use Poly.fromroots(...). Third, at degree 123 you are probably screwed anyway, since the accuracy of the root finding will be limited, especially for roots that can cluster, and any root that falls even a little bit outside the interval [-1, 1] (the default domain) is going to evaluate to a big number simply because the polynomial is going to h*ll at a rate you wouldn't believe ;)

For evenly spaced roots in [-1, 1] and using Chebyshev polynomials, things look good for degree 50, get a bit loose at degree 75 but can be fixed up with one iteration of Newton, and blow up at degree 100. I think that's pretty good, actually; doing better would require a lot more work. There are some zero finding algorithms out there that might do better if someone wants to give it a shot.

In [20]: p = Cheb.fromroots(linspace(-1, 1, 50))

In [21]: sort(abs(p(p.roots())))
Out[21]:
array([  6.20385459e-25,   1.65436123e-24,   2.06795153e-24,
         5.79026429e-24,   5.89366186e-24,   6.44916482e-24,
         6.44916482e-24,   6.77254127e-24,   6.97933642e-24,
         7.25459208e-24,   1.00295649e-23,   1.37391414e-23,
         1.37391414e-23,   1.63368171e-23,   2.39882378e-23,
         3.30872245e-23,   4.38405725e-23,   4.49502653e-23,
         4.49502653e-23,   5.58346913e-23,   8.35452419e-23,
         9.38407760e-23,   9.38407760e-23,   1.03703218e-22,
         1.03703218e-22,   1.23249911e-22,   1.75197880e-22,
         1.75197880e-22,   3.07711188e-22,   3.09821786e-22,
         3.09821786e-22,   4.56625520e-22,   4.56625520e-22,
         4.69638303e-22,   4.69638303e-22,   5.96448724e-22,
         5.96448724e-22,   1.24076485e-21,   1.24076485e-21,
         1.59972624e-21,   1.59972624e-21,   1.62930347e-21,
         1.62930347e-21,   1.73773328e-21,   1.73773328e-21,
         1.87935435e-21,   2.30287083e-21,   2.48815928e-21,
         2.85411753e-21,   2.85411753e-21])

Chuck
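[To make the suggestion concrete, a rough sketch contrasting the power basis with the Chebyshev basis for the same evenly spaced roots; this is an editorial illustration rather than code from the thread, with the degree chosen to match the example above.]

import numpy as np
from numpy.polynomial import Polynomial as Poly, Chebyshev as Cheb

roots = np.linspace(-1, 1, 50)           # 50 evenly spaced roots in [-1, 1]

p_pow = Poly.fromroots(roots)            # power basis, ill conditioned at this degree
p_cheb = Cheb.fromroots(roots)           # Chebyshev basis, much better behaved

for p in (p_pow, p_cheb):
    resid = np.sort(np.abs(p(p.roots())))
    print(type(p).__name__, resid[-3:])  # three worst residuals for each basis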

On Sat, Jan 28, 2012 at 11:14 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
Thanks for the very informative feedback. I'll study those orthogonal polynomials in more detail.

Regards,
eat

On Sun, Jan 29, 2012 at 10:03 AM, eat <e.antero.tammi@gmail.com> wrote:
That said, I'm thinking it might be possible to get a more accurate polynomial representation from the zeros by going through a barycentric form rather than simply multiplying the factors together, as is done now. Hmm...

For evenly spaced roots the polynomial grows rapidly in amplitude at the ends, which leads to numerical problems: the curve is so steep at the zeros that a small error in the zeros turns into a large error in value. I've attached a semilogy plot of the absolute values of the polynomial with 30 equally spaced zeros from -1 to 1.

Chuck
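[The attachment itself is not reproduced here; a plot of that kind could be regenerated with something like the following sketch. The plotting details are an editorial guess, not taken from the attachment.]

import numpy as np
import matplotlib.pyplot as plt
from numpy.polynomial import Polynomial as Poly

# Monic polynomial with 30 equally spaced zeros in [-1, 1].
p = Poly.fromroots(np.linspace(-1, 1, 30))

x = np.linspace(-1, 1, 2000)
plt.semilogy(x, np.abs(p(x)))            # |p(x)| grows rapidly near the ends
plt.xlabel("x")
plt.ylabel("|p(x)|")
plt.title("Monic polynomial with 30 equally spaced zeros")
plt.show()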

On Mon, Jan 30, 2012 at 6:55 AM, Charles R Harris <charlesr.harris@gmail.com> wrote:
I've attached a plot of the Chebyshev coefficients for the monic polynomial with 50 zeros evenly spaced from -1 to 1. The odd coefficients should be zero, so their value tells you what the error in the coefficient determination was (I used Gauss-Chebyshev integration). The resulting Chebyshev series cannot be evaluated with sufficient accuracy in double precision because of the dynamic range of the coefficients, and I expect that this simple inability of double precision to represent the values correctly carries over to the root finding.

Chuck
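[For readers who want to see roughly how such coefficients can be obtained, here is a sketch of the projection by Gauss-Chebyshev quadrature. This is an editorial reconstruction of the procedure described above; the number of quadrature nodes and other details are assumptions, not taken from the original script.]

import numpy as np
from numpy.polynomial import chebyshev as C

roots = np.linspace(-1, 1, 50)
deg = len(roots)                       # degree of the monic polynomial

def f(x):
    # Evaluate the monic polynomial prod(x - r) at the quadrature nodes.
    return np.prod(x[:, None] - roots, axis=1)

# Gauss-Chebyshev nodes and weights; use more nodes than the degree so the
# quadrature is exact for the products f(x) * T_j(x).
x, w = C.chebgauss(2 * deg)
V = C.chebvander(x, deg)               # T_0 .. T_deg evaluated at the nodes

# Project onto the Chebyshev basis: c_j = (2/pi) * integral of f*T_j/sqrt(1-x^2),
# with the j = 0 term getting half weight.
c = (2.0 / np.pi) * (V * (w * f(x))[:, None]).sum(axis=0)
c[0] /= 2.0

# The odd coefficients would be exactly zero in exact arithmetic; their size
# indicates the error in the coefficient determination.
print(np.abs(c[1::2]).max())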

On Mon, Jan 30, 2012 at 1:15 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
Oops, that was erroneous. The proximate cause of the problem seems to be poor precision in obtaining the coefficients from the roots. That can be improved. I've attached a few more plots ;)

Chuck
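[Related to the remark earlier in the thread that degree 75 "can be fixed up with one iteration of Newton", a minimal sketch of that clean-up step might look like this; it is an editorial illustration, not code from the thread.]

import numpy as np
from numpy.polynomial import Chebyshev as Cheb

p = Cheb.fromroots(np.linspace(-1, 1, 75))   # a degree where the roots get loose
r = p.roots()
r_polished = r - p(r) / p.deriv()(r)         # one Newton step per root

# Compare the worst residual before and after polishing.
print(np.abs(p(r)).max(), np.abs(p(r_polished)).max())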