[SciPy-User] orthonormal polynomials (cont.)
Christopher Jordan-Squire
cjordan1 at uw.edu
Wed Jun 15 11:13:41 EDT 2011
If you google around, there are numerically stable versions of
Gram-Schmidt/QR factorization and they're quite simple to implement. You
just have to be slightly careful and not code it up the way it's taught in a
first linear algebra course. However, the numerical instability can show up
even for small numbers of basis vectors; the issue isn't the number of basis
vectors but whether they're approximately linearly dependent.
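A minimal sketch of the difference (my own illustration, not from the thread): classical Gram-Schmidt as taught in a first course versus the modified variant, applied to a plain power basis on [0, 1] whose columns are approximately linearly dependent. The helper names and the degree-11 test matrix are made up for the demonstration.

```python
import numpy as np

def classical_gs(A):
    # Classical Gram-Schmidt: all projections are computed from the
    # *original* column, so rounding errors in the projections accumulate.
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j] - Q[:, :j] @ (Q[:, :j].T @ A[:, j])
        Q[:, j] = v / np.linalg.norm(v)
    return Q

def modified_gs(A):
    # Modified Gram-Schmidt: orthogonalize against each earlier q in turn,
    # using the partially reduced vector; far more stable in practice.
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j].astype(float)
        for i in range(j):
            v = v - Q[:, i] * (Q[:, i] @ v)
        Q[:, j] = v / np.linalg.norm(v)
    return Q

# Nearly dependent columns: the monomials 1, x, ..., x**11 on [0, 1].
x = np.linspace(0, 1, 50)
A = np.vander(x, 12, increasing=True)

for gs in (classical_gs, modified_gs):
    Q = gs(A)
    err = np.linalg.norm(Q.T @ Q - np.eye(Q.shape[1]))
    print(gs.__name__, err)
```

The printed orthogonality error is several orders of magnitude worse for the classical version, even though there are only 12 basis vectors.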
-Chris JS
On Tue, Jun 14, 2011 at 7:18 PM, Charles R Harris
<charlesr.harris at gmail.com> wrote:
>
>
> On Tue, Jun 14, 2011 at 3:57 PM, <josef.pktd at gmail.com> wrote:
>
>> On Tue, Jun 14, 2011 at 5:48 PM, Charles R Harris
>> <charlesr.harris at gmail.com> wrote:
>> >
>> >
>> > On Tue, Jun 14, 2011 at 3:15 PM, Charles R Harris
>> > <charlesr.harris at gmail.com> wrote:
>> >>
>> >>
>> >> On Tue, Jun 14, 2011 at 3:01 PM, <josef.pktd at gmail.com> wrote:
>> >>>
>> >>> On Tue, Jun 14, 2011 at 4:40 PM, Charles R Harris
>> >>> <charlesr.harris at gmail.com> wrote:
>> >>> >
>> >>> >
>> >>> > On Tue, Jun 14, 2011 at 12:10 PM, nicky van foreest
>> >>> > <vanforeest at gmail.com>
>> >>> > wrote:
>> >>> >>
>> >>> >> Hi,
>> >>> >>
>> >>> >> Without understanding the details... I recall from Numerical
>> >>> >> Recipes in C that Gram-Schmidt is a very risky recipe. I don't
>> >>> >> know whether this advice also pertains to fitting polynomials,
>> >>> >> however.
>> >>>
>> >>> I read some warnings about the stability of Gram Schmidt, but the idea
>> >>> is that if I can choose the right weight function, then I should need
>> >>> only a few polynomials. So, I guess in that case numerical stability
>> >>> wouldn't be very relevant.
>> >>>
>> >>> >>
>> >>> >> Nicky
>> >>> >>
>> >>> >> On 14 June 2011 18:58, <josef.pktd at gmail.com> wrote:
>> >>> >> > (I'm continuing the story with orthogonal polynomial density
>> >>> >> > estimation, and found a nice new paper
>> >>> >> > http://www.informaworld.com/smpp/content~db=all~content=a933669464 )
>> >>> >> >
>> >>> >> > Last time I managed to get orthonormal polynomials out of scipy
>> >>> >> > with weight 1, and it worked well for density estimation.
>> >>> >> >
>> >>> >> > Now, I would like to construct my own orthonormal polynomials for
>> >>> >> > arbitrary weights. (The weights represent a base density around
>> >>> >> > which
>> >>> >> > we make the polynomial expansion).
>> >>> >> >
>> >>> >> > The reference refers to Gram-Schmidt or Emmerson recurrence.
>> >>> >> >
>> >>> >> > Is there a reasonably easy way to get the polynomial coefficients
>> >>> >> > for
>> >>> >> > this with numscipython?
>> >>> >> >
>> >>> >
>> >>> > What do you mean by 'polynomial'? If you want the values of a set
>> >>> > of polynomials orthonormal on a given set of points, you want the
>> >>> > 'q' in a qr factorization of a (row) weighted Vandermonde matrix.
>> >>> > However, I would suggest using a weighted chebvander instead for
>> >>> > numerical stability.
>> >>>
>> >>> Following your suggestion last time to use QR, I had figured out how
>> >>> to get the orthonormal basis for a given set of points.
>> >>> Now, I would like to get the functional version (not just for a given
>> >>> set of points), that is an orthonormal polynomial basis like Hermite,
>> >>> Legendre, Laguerre and Jacobi, only for any kind of weight function,
>> >>> where the weight function is chosen depending on the data.
>> >>>
>> >>
>> >> But in what basis? The columns of the inverse of 'R' in QR will give
>> >> you the orthonormal polynomials as series in whatever basis you used
>> >> for the columns of the pseudo-Vandermonde matrix.
>>
>> By basis you mean here, for example, the power series (x**i,
>> i=0,1,...)? That's what they use, but there is also a reference to
>> using Fourier polynomials, which I haven't looked at for this case.
>>
>> >>
>> >
>> > Example.
>> >
>> > In [1]: from numpy.polynomial.polynomial import polyvander
>> >
>> > In [2]: v = polyvander(linspace(-1, 1, 1000), 3)
>> >
>> > In [3]: around(inv(qr(v, mode='r'))*sqrt(1000./2), 5)
>> > Out[3]:
>> > array([[-0.70711, 0. , 0.79057, -0. ],
>> > [ 0. , 1.22352, -0. , -2.80345],
>> > [ 0. , 0. , -2.36697, 0. ],
>> > [ 0. , 0. , 0. , 4.66309]])
>> >
>> > The columns are approx. the coefficients of the normalized Legendre
>> > functions as a power series up to a sign.
>>
>> Looks interesting. It will take me a while to figure out what this
>> does, but I think I get the idea.
>>
>> Thanks,
>>
>>
> The normalization factor comes from integrating the columns of q, i.e.,
> \int p^2 ~= dt*\sum_i (q_{ij})^2 = 2/1000. I really should have weighted the
> first and last rows of the Vandermonde matrix by 1/sqrt(2) so that the
> integration was trapezoidal, but you get the idea.
>
> Chuck
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>
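A self-contained version of the session above (which relied on pylab-style imports), plus the row-weighted variant Chuck describes at the end. QR of a pseudo-Vandermonde matrix gives polynomials orthonormal on the sample points, and the columns of inv(R) are their coefficients in the power basis. With trapezoid weights on the rows, the discrete inner product is exactly the trapezoidal rule and no ad-hoc rescaling is needed. The variable names are my own.

```python
import numpy as np
from numpy.polynomial.polynomial import polyvander

n = 1000
x = np.linspace(-1, 1, n)
v = polyvander(x, 3)  # columns 1, x, x**2, x**3 at the sample points

# Unweighted: rescale inv(R) by sqrt(n/2) so the discrete norm
# approximates the L2 norm on [-1, 1], as in the session above.
r = np.linalg.qr(v, mode='r')
coef = np.linalg.inv(r) * np.sqrt(n / 2.0)

# Weighted: trapezoid weights (endpoints get half weight).  Multiplying
# the rows by sqrt(w) makes sum_i w_i p(x_i) q(x_i) the inner product
# that QR orthonormalizes, so inv(R) needs no extra scale factor.
w = np.full(n, 2.0 / (n - 1))
w[0] /= 2.0
w[-1] /= 2.0
rw = np.linalg.qr(np.sqrt(w)[:, None] * v, mode='r')
coef_w = np.linalg.inv(rw)

print(np.around(coef, 5))
print(np.around(coef_w, 5))
```

Up to signs, both matrices should be close to the power-series coefficients of the normalized Legendre polynomials (e.g. sqrt(1/2) = 0.70711 in the first column, sqrt(3/2) = 1.22474 on the diagonal of the second), with the weighted version slightly closer.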