[SciPy-User] orthogonal polynomials ?

Charles R Harris charlesr.harris at gmail.com
Mon May 16 17:58:51 EDT 2011


On Mon, May 16, 2011 at 12:40 PM, <josef.pktd at gmail.com> wrote:

> On Mon, May 16, 2011 at 12:47 PM,  <josef.pktd at gmail.com> wrote:
> > On Mon, May 16, 2011 at 12:29 PM, Charles R Harris
> > <charlesr.harris at gmail.com> wrote:
> >>
> >>
> >> On Mon, May 16, 2011 at 10:22 AM, <josef.pktd at gmail.com> wrote:
> >>>
> >>> On Mon, May 16, 2011 at 11:27 AM, Charles R Harris
> >>> <charlesr.harris at gmail.com> wrote:
> >>> >
> >>> >
> >>> > On Mon, May 16, 2011 at 9:17 AM, Charles R Harris
> >>> > <charlesr.harris at gmail.com> wrote:
> >>> >>
> >>> >>
> >>> >> On Mon, May 16, 2011 at 9:11 AM, <josef.pktd at gmail.com> wrote:
> >>> >>>
> >>> >>> On Sat, May 14, 2011 at 6:04 PM, Charles R Harris
> >>> >>> <charlesr.harris at gmail.com> wrote:
> >>> >>> >
> >>> >>> >
> >>> >>> > On Sat, May 14, 2011 at 2:26 PM, <josef.pktd at gmail.com> wrote:
> >>> >>> >>
> >>> >>> >> On Sat, May 14, 2011 at 4:11 PM, nicky van foreest
> >>> >>> >> <vanforeest at gmail.com>
> >>> >>> >> wrote:
> >>> >>> >> > Hi,
> >>> >>> >> >
> >>> >>> >> > Might this be what you want:
> >>> >>> >> >
> >>> >>> >> > The first eleven probabilists' Hermite polynomials are:
> >>> >>> >> >
> >>> >>> >> > ...
> >>> >>> >> >
> >>> >>> >> > My Chromium browser does not seem to paste PNGs. Anyway, check
> >>> >>> >> >
> >>> >>> >> >
> >>> >>> >> > http://en.wikipedia.org/wiki/Hermite_polynomials
> >>> >>> >> >
> >>> >>> >> > and you'll see that the first polynomial is 1, the second x, and
> >>> >>> >> > so forth. From my courses on quantum mechanics I recall that these
> >>> >>> >> > polynomials are, with respect to some weight function, orthogonal.
> >>> >>> >>
> >>> >>> >> Thanks, I haven't looked at that yet. We should add Wikipedia
> >>> >>> >> links to the scipy.special docs.
> >>> >>> >>
> >>> >>> >> However, I would like to change the last part, "with respect to some
> >>> >>> >> weight function":
> >>> >>> >> http://en.wikipedia.org/wiki/Hermite_polynomials#Orthogonality
> >>> >>> >>
> >>> >>> >> Instead of Gaussian weights I would like uniform weights on bounded
> >>> >>> >> support. And I have never seen anything about changing the weight
> >>> >>> >> function for the orthogonal basis of this kind of polynomial.
> >>> >>> >>
> >>> >>> >
> >>> >>> > In numpy 1.6, you can use the Legendre polynomials. They are
> >>> >>> > orthogonal on [-1,1], as has been mentioned, but can be mapped to
> >>> >>> > other domains. For example:
> >>> >>> >
> >>> >>> > In [1]: from numpy.polynomial import Legendre as L
> >>> >>> >
> >>> >>> > In [2]: for i in range(5): plot(*L([0]*i + [1],
> >>> >>> > domain=[0,1]).linspace())
> >>> >>> >    ...:
> >>> >>> >
> >>> >>> > produces the attached plots.
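For readers without the attached plots: the quoted session relies on an interactive pylab environment for `plot`. A self-contained sketch of what it computes, printing the endpoint values instead of plotting (the polynomials satisfy P_i(1) = 1 and P_i(-1) = (-1)**i):

```python
import numpy as np
from numpy.polynomial import Legendre as L

# L([0]*i + [1]) has a single nonzero coefficient, so it is the basis
# polynomial P_i itself; domain=[0, 1] maps [0, 1] onto the standard
# interval [-1, 1] where the P_i are orthogonal.
for i in range(5):
    x, y = L([0] * i + [1], domain=[0, 1]).linspace(101)
    # y[0] is P_i at the left edge (mapped to -1), y[-1] at the right (mapped to 1)
    print(i, round(y[0], 6), round(y[-1], 6))
```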
> >>> >>>
> >>> >>> I'm still on numpy 1.5 so this will have to wait a bit.
> >>> >>>
> >>> >>> > <snip>
> >>> >>> >
> >>> >>> > Chuck
> >>> >>> >
> >>> >>>
> >>> >>>
> >>> >>> as a first application for orthogonal polynomials I was trying to get
> >>> >>> an estimate for a density, but I haven't figured out the weighting yet.
> >>> >>>
> >>> >>> Fourier polynomials work better for this.
> >>> >>>
> >>> >>
> >>> >> You might want to try Chebyshev then; the Chebyshev polynomials are
> >>> >> essentially cosines and will handle the ends better. Weighting might
> >>> >> also help, as I expect the distribution of the errors is somewhat
> >>> >> Poisson.
> >>> >>
> >>> >
> >>> > I should mention that all the polynomial fits will give you the same
> >>> > results, but the Chebyshev fits are more numerically stable. The general
> >>> > approach is to overfit, i.e., use more polynomials than needed, and then
> >>> > truncate the series, resulting in a faux min/max approximation. Unlike a
> >>> > power series, the coefficients of the Chebyshev series will tend to
> >>> > decrease rapidly at some point.
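The overfit-then-truncate recipe can be sketched with the numpy Chebyshev class; the exponential target below is a hypothetical stand-in for whatever is actually being fit:

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Hypothetical smooth target; any smooth function on [-1, 1] behaves similarly.
x = np.linspace(-1, 1, 201)
y = np.exp(x)

# Overfit deliberately with a high degree, then truncate the series.
fit = Chebyshev.fit(x, y, 30)
small = fit.truncate(9)          # keep only c_0 .. c_8

# Chebyshev coefficients of a smooth function fall off rapidly, so the
# truncated fit is still a near-min/max approximation.
print(np.abs(fit.coef[:9]))      # rapidly decreasing magnitudes
err = np.max(np.abs(small(x) - y))
print(err)                       # tiny truncation error
```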
> >>>
> >>> I think I might still have something wrong with the way I use the
> >>> scipy.special polynomials.
> >>>
> >>> For a large sample size with 10000 observations, the nice graph is
> >>> Fourier with 20 elements; the second (not so nice) is with
> >>> scipy.special.chebyt with 500 polynomials. The graph for 20 Chebyshev
> >>> polynomials looks very similar.
> >>>
> >>> Chebyshev doesn't want to get low enough to adjust to the low part.
> >>>
> >>> (Note: I rescale to [0,1] for Fourier, and [-1,1] for chebyt.)
> >>>
> >>
> >> That certainly doesn't look right. Could you mail me the data offline?
> >> Also, what are you fitting: the histogram, the cdf, or...?
> >
> > The numbers are generated (by a function in scikits.statsmodels). It's
> > all in the script that I posted, except I keep changing things.
> >
> > I'm not fitting anything directly.
> > There is supposed to be a closed-form expression: the estimated
> > coefficient of each polynomial is just the mean of that polynomial
> > evaluated at the data. The theory and the descriptions sounded easy,
> > but unfortunately it didn't work out.
> >
> > I was just hoping to get lucky and be able to skip the small print.
> > http://onlinelibrary.wiley.com/doi/10.1002/wics.97/abstract
> > got me started, and it has the Fourier case that works.
> >
> > There are lots of older papers that I only skimmed, but I should be
> > able to work out the Hermite case before going back to the general
> > case again with arbitrary orthogonal polynomial bases. (Initially I
> > wanted to ignore the Hermite bases because gaussian_kde works well in
> > that case.)
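For the Fourier case that does work, the "coefficient is just the mean of the polynomial evaluated at the data" recipe can be sketched as follows; the Beta sample is a hypothetical stand-in for the data generated by scikits.statsmodels:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sample on [0, 1]; the Beta draw is a stand-in for real data.
data = rng.beta(2, 3, size=10000)

# Orthonormal cosine basis on [0, 1]: phi_0 = 1, phi_k(x) = sqrt(2)*cos(k*pi*x).
# Each series coefficient is the sample mean of the basis function:
#     c_k = mean(phi_k(data))
K = 20
grid = np.linspace(0, 1, 201)
f_hat = np.ones_like(grid)            # c_0 = mean(phi_0(data)) = 1 exactly
for k in range(1, K + 1):
    c_k = np.mean(np.sqrt(2) * np.cos(k * np.pi * data))
    f_hat += c_k * np.sqrt(2) * np.cos(k * np.pi * grid)

# The estimate integrates to ~1 with no rescaling step needed
# (trapezoid rule, written out to stay version-independent).
area = np.sum((f_hat[1:] + f_hat[:-1]) / 2) * (grid[1] - grid[0])
print(area)
```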
>
> Just another graph before stopping with this
>
> chebyt works if I cheat (rescale at the end):
>
> f_hat = (f_hat - f_hat.min())
> fint2 = integrate.trapz(f_hat, grid)
> f_hat /= fint2
>
> The graph is with chebyt with 30 polynomials after shifting and scaling;
> 20 polynomials also looks good.
>
> (hunting for the missing scaling term is left for some other day)
>
> In any case, there's a recipe for non-parametric density estimation
> with compact support.
>
>
Ah, now I see what is going on -- Monte Carlo integration to get the
expansion of the pdf in terms of orthogonal polynomials. So yes, I think
Legendre polynomials are probably the ones to use unless you use the weight
in the integral. Note that 1.6 also has the Hermite and Laguerre
polynomials. But it seems that for these things it would also be desirable
to have the normalization constants.
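For the Legendre basis those normalization constants are int P_k(x)^2 dx = 2/(2k+1) on [-1, 1]; folding the factor (2k+1)/2 into each coefficient is one candidate for the missing scaling term discussed earlier in the thread. A sketch, with a hypothetical Beta sample standing in for the real data:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)
# Hypothetical sample, rescaled to [-1, 1]; the Beta draw is a stand-in.
data = 2 * rng.beta(2, 3, size=10000) - 1

# Orthogonal series density estimate with Legendre polynomials on [-1, 1].
# Since int_{-1}^{1} P_k(x)^2 dx = 2/(2k + 1), the normalized coefficient is
#     c_k = (2k + 1)/2 * mean(P_k(data))
K = 20
coef = np.zeros(K + 1)
for k in range(K + 1):
    basis = np.zeros(k + 1)
    basis[k] = 1.0                    # coefficient vector selecting P_k
    coef[k] = (2 * k + 1) / 2 * legendre.legval(data, basis).mean()

grid = np.linspace(-1, 1, 401)
f_hat = legendre.legval(grid, coef)

# Integrates to ~1 without any ad hoc shifting or rescaling
# (trapezoid rule, written out to stay version-independent).
area = np.sum((f_hat[1:] + f_hat[:-1]) / 2) * (grid[1] - grid[0])
print(area)
```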

Chuck