[Numpy-discussion] numpy.polynomial.Polynomial

josef.pktd at gmail.com
Fri Oct 22 15:51:12 EDT 2010


On Fri, Oct 22, 2010 at 2:54 PM,  <josef.pktd at gmail.com> wrote:
> trying again, my reply got bounced back by the postfix mail
> delivery program
>
> On Fri, Oct 22, 2010 at 2:45 PM,  <josef.pktd at gmail.com> wrote:
>> On Fri, Oct 22, 2010 at 2:14 PM, Charles R Harris
>> <charlesr.harris at gmail.com> wrote:
>>>
>>>
>>> On Fri, Oct 22, 2010 at 11:47 AM, <josef.pktd at gmail.com> wrote:
>>>>
>>>> On Fri, Oct 22, 2010 at 12:26 PM, Charles R Harris
>>>> <charlesr.harris at gmail.com> wrote:
>>>> >
>>>> >
>>>> > On Fri, Oct 22, 2010 at 9:51 AM, <josef.pktd at gmail.com> wrote:
>>>> >>
>>>> >> I'm subclassing numpy.polynomial.Polynomial. So far it works well.
>>>> >>
>>>> >> One question on inplace changes
>>>> >>
>>>> >> Is it safe to change coef directly without creating a new instance?
>>>> >> I'm not trying to change anything else in the polynomial, just for
>>>> >> example pad, truncate or invert the coef in place, e.g.
>>>> >>
>>>> >> def pad(self, maxlag):
>>>> >>    self.coef = np.r_[self.coef, np.zeros(maxlag - len(self.coef))]
>>>> >>
>>>> >> Currently, I have rewritten this to return a new instance.
>>>> >>
>>>> >
>>>> > You can (currently) modify the coef and it should work, but I think
>>>> > it best to regard the Polynomial class as immutable. I'm even
>>>> > contemplating making the coef attribute read-only just to avoid such
>>>> > things. Another tip is to use // instead of / for division;
>>>> > polynomials are rather like integers in that respect and don't have
>>>> > a true divide, so plain old / will fail on Python 3.x.
>>>> >
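A minimal illustration of the integer-like division mentioned above (assuming the current `numpy.polynomial.Polynomial` class; the particular polynomials are my own example, not from the thread):

```python
import numpy as np
from numpy.polynomial import Polynomial

# Polynomials divide like integers: use // (or divmod), not true division.
# (x**2 - 1) = (x + 1) * (x - 1), so the quotient is x - 1, remainder 0.
p = Polynomial([-1.0, 0.0, 1.0])   # x**2 - 1
d = Polynomial([1.0, 1.0])         # x + 1
q, r = divmod(p, d)
print(q.coef)   # coefficients of the quotient, x - 1
print(r.coef)   # coefficients of the remainder
```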
>>>> > Note that most operations will trim trailing zeros off the result.
>>>> >
>>>> > In [6]: P((1,1,1,0,0,0))
>>>> > Out[6]: Polynomial([ 1.,  1.,  1.,  0.,  0.,  0.], [-1.,  1.])
>>>> >
>>>> > In [7]: P((1,1,1,0,0,0)) + 1
>>>> > Out[7]: Polynomial([ 2.,  1.,  1.], [-1.,  1.])
>>>> >
>>>> > The reason the constructor doesn't trim is that trailing zeros can
>>>> > be of interest in least squares fits. Is there a particular use case
>>>> > for which trailing zeros are important for you? The polynomial
>>>> > modules aren't finished products yet; I can still add some
>>>> > functionality if you think it useful.
>>>>
>>>> I need "long" division; an example was A(L)/B(L) for lag
>>>> polynomials, as I showed before.
>>>>
>>>
>>> OK, I kinda thought that was what you wanted. It would be a version of
>>> "true" division; the missing pieces are how to extend that to other
>>> bases, there are several possibilities... But I suppose they could just
>>> be marked not implemented for the time being. There also needs to be a
>>> way to specify "precision" and the location of the "decimal" point.
>>
> As long as subclassing works (and it seems to work well so far), adding
> a few topic-specific methods is quite easy.
>
>>
>>>
>>>>
>>>> My current version (unfinished since I got distracted by
>>>> stats.distribution problems):
>>>>
>>>> import numpy as np
>>>> from numpy import polynomial as npp
>>>>
>>>>
>>>> class LagPolynomial(npp.Polynomial):
>>>>
>>>>    #def __init__(self, maxlag):
>>>>
>>>>    def pad(self, maxlag):
>>>>        return LagPolynomial(np.r_[self.coef, np.zeros(maxlag - len(self.coef))])
>>>>
>>>>    def padflip(self, maxlag):
>>>>        return LagPolynomial(np.r_[self.coef, np.zeros(maxlag - len(self.coef))][::-1])
>>>>
>>>>    def flip(self):
>>>>        '''reverse polynomial coefficients
>>>>        '''
>>>>        return LagPolynomial(self.coef[::-1])
>>>>
>>>>    def div(self, other, maxlag=None):
>>>>        '''padded division, pads numerator with zeros to maxlag
>>>>        '''
>>>>        if maxlag is None:
>>>>            maxlag = max(len(self.coef), len(other.coef)) + 1
>>>>        return (self.padflip(maxlag) // other.flip()).flip()
>>>>
>>>>    def filter(self, arr):
>>>>        return (self * arr)  #trim to end
>>>>
>>>>
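The pad-flip-divide idea in the class above can be condensed into a standalone sketch (my own version, using `//` as suggested earlier in the thread; the function name `lag_div` and the AR(1) coefficients are hypothetical examples):

```python
import numpy as np
from numpy.polynomial import Polynomial

def lag_div(num, den, maxlag):
    # Pad the numerator with zeros, reverse both coefficient arrays,
    # do ordinary polynomial floor division, and reverse back.  This
    # yields the truncated series expansion of num(L) / den(L).
    num = np.r_[num, np.zeros(maxlag - len(num))]
    q = Polynomial(num[::-1]) // Polynomial(np.asarray(den, float)[::-1])
    return q.coef[::-1]

# 1 / (1 - 0.5 L) = 1 + 0.5 L + 0.25 L**2 + ...
print(lag_div([1.0], [1.0, -0.5], maxlag=4))
```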
>>>> another method I haven't copied over yet is the adjusted fromroots
>>>> (normalized lag-polynomial from roots)
>>>>
>>>> Essentially, I want to get the AR and ARMA processes in several
>>>> different ways because I don't trust (my interpretation of) any single
>>>> implementation, and eventually to see which one is fastest.
>>>>
>>>
>>>
>>> I could also implement "polyz" polynomials that would use negative powers of
>>> z. The Chebyshev polynomials are currently implemented with symmetric
>>> z-series using both positive and negative powers, but I may change that.
>
>
>  My background for this is pretty much causal filters in time series
>  analysis. I still have only vague ideas about some of the signal
>  processing and polynomial stuff discussed in the previous thread. But I
>  take whatever I can get, and can figure out how to use it.
>
>  The polynomial class (and my wrappers around scipy.signal and fft) is
>  nice because it allows almost literal translation of textbook
>  formulas. If I have enough time, spectral densities are next on the
>  schedule.
>
>  Thanks, I will keep treating the Polynomials as immutable.
>
>  Josef
>
>
>>>
>>> Another possibility is some sort of factory function that emits
>>> polynomial classes with certain additional properties; I'm thinking of
>>> something like that for Jacobi polynomials.
>>>
>>> Chuck

(and if I'm allowed to do some advertising)

Here is the current version of the ARMA process class, currently using
just Polynomial (not yet LagP) as one representation.

http://bazaar.launchpad.net/~josef-pktd/statsmodels/statsmodels-josef-experimental-gsoc/annotate/head%3A/scikits/statsmodels/tsa/arima_process.py#L191

(Now I just have to figure out how to fix starting observations for
signal.lfilter and find my fft version)
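For the starting-observation problem, one possibility is to build the initial filter state with `lfiltic` and pass it to `lfilter` via `zi` (a sketch under my own assumptions; the AR(1) coefficient 0.5 and the pre-sample value are made up for illustration):

```python
import numpy as np
from scipy.signal import lfilter, lfiltic

# AR(1): y[t] = 0.5 * y[t-1] + x[t], started from a chosen y[-1]
b, a = [1.0], [1.0, -0.5]
x = np.zeros(5)
x[0] = 1.0                   # impulse input
zi = lfiltic(b, a, y=[2.0])  # initial state implied by y[-1] = 2.0
y, zf = lfilter(b, a, x, zi=zi)
print(y)  # first value is x[0] + 0.5 * y[-1] = 2.0, then decays by 0.5
```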

Josef

