Volunteer for Scipy Project
Dear Sir/Ma'am
I was in e-mail contact with one of your colleagues, Travis; I am very interested in contributing to the scipy project as a volunteer. I believe that I can contribute in:
(1) Documentation
(2) Suggesting and developing new functionality
(3) Studying similar software such as Matlab, Mathematica, Maple, etc., looking for inspiration for possible improvements in scipy
Some of my dabbling in Python is discussed on my blog, http://programming-unlimited.blogspot.com/ and I am eager to be a part of scipy: a revolution in the making.
I look forward to your reply.
Kind regards
Arkapravo
Arkapravo,

Great! As far as helping with the documentation is concerned, here's what you do:

0) Visit and read docs.scipy.org/numpy
1) As it says there, register a username at http://docs.scipy.org/numpy/accounts/register
2) Subscribe to scipy-dev@scipy.org at http://mail.scipy.org/mailman/listinfo/scipy-dev and email there the username created in Step 1, along with an explanation that you would like docstring editing privileges. (Subscribing is not strictly necessary - an alternative is described at 0) - but all numpy and scipy development and documentation issues are discussed there, so it really is the most convenient way to communicate with the development community.)
3) Once you receive confirmation of your editing privileges, you can log in and visit http://docs.scipy.org/numpy/Milestones and http://docs.scipy.org/numpy/docs/ to see numpy docstrings that need editing (relatively few now), and http://docs.scipy.org/numpy/Milestones for scipy docstrings that need editing (still plentiful). (Before you edit, be sure to have read, at a minimum, http://projects.scipy.org/scipy/numpy/wiki/CodingStyleGuidelines#docstring-s... .)
4) Questions can be posted here, in the Discussion section following each individual docstring editing page, or at the Q+A page http://docs.scipy.org/numpy/Questions+Answers/.

Thanks again,
David Goldsmith
Technical Editor
Olympia, Washington, USA
Hi Arkapravo,
Thanks for your help. As in most open source projects, the best way to contribute is to work on something that you actually need for yourself, be it code enhancement, bug fixes, documentation, etc.

Depending on your interests and proficiency in Python vs. C vs. Fortran, there are several things that could be worked on: the ones which need the most work at the moment are scipy.special and scipy.interpolate.

Please note also that you should be careful when 'studying' proprietary software - reimplementing similar functionality from scratch is OK; re-using the implementation is almost never OK.

cheers,

David
I would add a mode of working that is definitely okay: find a reference to a journal article on how to compute the thing you're interested in (possibly by finding the reference in matlab/r/maple/mathematica/whatever source or documentation) and implement what is described in the research paper. This is a great way to fill gaps in scipy with good, well-thought-out algorithms.

Anne
Just curious, Anne: have you anything in particular in mind (i.e., are there some small - or gaping - holes in scipy (IYO, of course) which you know could be filled by a careful implementation of something(s) extant in the literature)?

DG
Well, not exactly - the examples I had in mind were minor and/or in the past. I ran into problems with scipy's hyp2f1, for example, so I went and looked up the best algorithm I could find for it (and I think I contributed that code). I wanted the Kuiper test as an alternative to the Kolmogorov-Smirnov test (it's invariant under cyclic permutations, and is sensitive to different features of the distribution) so I looked up the test and the special function needed to interpret its results. (I haven't contributed this to scipy yet, mostly because I chose an interface that's not particularly compatible with that for scipy's K-S test.) And on a larger scale, that's what scipy.spatial's kdtree implementation is. For examples where I think a bit of lit review plus implementation work might help, I'd say that the orthogonal polynomials could use some work - the generic implementation in scipy.special falls apart rapidly as you go to higher orders. I always implement my own Chebyshev polynomials using the cos(n*arccos(x)) expression, for example, and special implementations for the others might be very useful. Anne
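The Kuiper statistic Anne mentions is easy to sketch. As a hedged illustration (not the interface she wrote, just the textbook statistic): against a Uniform(0, 1) null, Kuiper's V is the sum of the two one-sided Kolmogorov-Smirnov discrepancies, and - unlike the K-S D - it is unchanged when the data are rotated around the circle mod 1.

```python
import numpy as np

def kuiper_statistic(u):
    """Kuiper's V for samples u assumed drawn from Uniform(0, 1).

    V = D+ + D-, the sum of the one-sided K-S discrepancies; V is
    invariant under cyclic shifts of the data mod 1.
    """
    u = np.sort(np.asarray(u, dtype=float))
    n = len(u)
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - u)           # empirical CDF above the model
    d_minus = np.max(u - (i - 1) / n)    # empirical CDF below the model
    return d_plus + d_minus
```

Interpreting V still needs the special function Anne refers to (the asymptotic distribution of V under the null), which is the literature-lookup part of the exercise.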
At what order does the scipy implementation of the Chebyshev polynomials fall apart? I looked briefly at that package a long time ago, but never used it. I ask so I can check the chebyshev module that is going into numpy. Chuck
By n=30 they are off by as much as 0.0018 on [-1,1]; n=31 they are off by 0.1, and by n=35 they are off by four - not great for values that should be in the interval [-1,1]. This may seem like an outrageously high degree for a polynomial, but there's no reason they need be this bad, and one could quite reasonably want to use an order this high, say for function approximation. I think this inaccuracy is probably inevitable in a scheme that computes values using a recurrence relation, and something like it probably occurs for all the orthogonal polynomials that don't have special-purpose evaluators. Anne
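The failure mode is easy to reproduce without scipy. A minimal sketch (not scipy's actual code path, but the same power-basis evaluation its orthopoly1d objects reduce to): build the monomial coefficients of T_n from the recurrence, evaluate with Horner, and compare against the exact identity T_n(x) = cos(n*arccos(x)). The exact error magnitudes vary by machine, but the blow-up with n is robust.

```python
import numpy as np

def cheb_monomial_coeffs(n):
    """Coefficients of T_n in the power basis, highest degree first."""
    # T_0 = 1, T_1 = x, T_{k+1} = 2*x*T_k - T_{k-1}
    t_prev, t_cur = np.array([1.0]), np.array([1.0, 0.0])
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_next = 2.0 * np.append(t_cur, 0.0)   # multiply by 2x
        t_next[2:] -= t_prev                   # subtract T_{k-1}
        t_prev, t_cur = t_cur, t_next
    return t_cur

x = np.linspace(-1, 1, 2001)
for n in (10, 30, 35):
    approx = np.polyval(cheb_monomial_coeffs(n), x)  # Horner in the monomial basis
    exact = np.cos(n * np.arccos(x))
    print(n, np.max(np.abs(approx - exact)))
```

The coefficients themselves are exact integers here; the error comes entirely from cancellation when the monomial terms (which grow like 2^n) are summed, which is why no power-basis representation can rescue high orders.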
Hmm, I get a maximum error of about 1e-13 for n=100 using my routine, which isn't great and can be improved a bit with a few tricks, but is probably good enough. That's using simple Clenshaw recursion for the evaluation. There must be something seriously wrong with the scipy version. I routinely use fits with n > 50 because truncating such a series gives a good approximation to the minmax fit and it's also nice to see how the coefficients fall off to estimate the size of the truncation error.

> I think this inaccuracy is probably inevitable in a scheme that
> computes values using a recurrence relation

Not so, using the Chebyshev recurrence in either direction should be stable for |x| <= 1. It's like multiplying with a complex number of modulus 1, i.e., a unitary matrix.

> and something like it probably occurs for all the orthogonal
> polynomials that don't have special-purpose evaluators.

Depends on the recursion, the direction of the recursion, and the domain.

Chuck
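The Clenshaw evaluation Chuck describes is short enough to sketch. A minimal version (an illustration, not Chuck's actual routine) stays near machine precision on [-1, 1] even for a pure T_100 term:

```python
import numpy as np

def clenshaw_cheb(c, x):
    """Evaluate sum_k c[k] * T_k(x) by Clenshaw's recurrence.

    Stable for |x| <= 1; works elementwise on numpy arrays.
    """
    b1 = b2 = 0.0
    for ck in c[:0:-1]:                       # c[n], c[n-1], ..., c[1]
        b1, b2 = ck + 2.0 * x * b1 - b2, b1
    return c[0] + x * b1 - b2

# A pure T_100 term should match cos(100*arccos(x)) almost exactly.
x = np.linspace(-1, 1, 2001)
c = np.zeros(101)
c[100] = 1.0
print(np.max(np.abs(clenshaw_cheb(c, x) - np.cos(100 * np.arccos(x)))))
```

The recurrence never forms the large monomial coefficients at all, which is why it avoids the cancellation that kills the power-basis evaluation.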
Beside the obvious one of eventually requiring people to change their code (hopefully only very trivially), would there be any other negative ramifications to steering users (present and would-be) away from the scipy Cheb to the numpy Cheb (and eventually deprecating the former)? Or is there a good reason to maintain the same functionality in two separate namespaces?

DG
Upon closer inspection, it seems that scipy starts from the recurrence relation but then uses it only to extract roots, which it then uses to construct a poly1d instance. (Which presumably then goes on to convert it into the 1, x, x**2, ... basis.) This is numerically disastrous. The reason it does this is because scipy doesn't (didn't) have a general-purpose polynomial class that can keep the polynomial in other representations. What do you think of the idea of a polynomial class that supports several different bases for the set of polynomials? (At least, the power basis and Chebyshev polynomials, but possibly also the other orthogonal polynomials. Translated and scaled bases might come in handy too.) Too general? Not general enough (barycentric representations, representation in terms of roots)? I think it could be done without too much difficulty. Anne
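Anne's diagnosis can be reproduced in miniature (a sketch of the failure mode, not scipy's actual code): rebuild T_n in the power basis from its known roots cos((2k+1)pi/2n), the way a poly1d would hold it, and watch the evaluation degrade even though every root is exact.

```python
import numpy as np

n = 35
k = np.arange(n)
roots = np.cos((2 * k + 1) * np.pi / (2 * n))     # exact roots of T_n
# Expand prod(x - r) into power-basis coefficients, as poly1d stores them;
# the leading coefficient of T_n is 2**(n-1).
p = np.poly1d(2.0 ** (n - 1) * np.poly(roots))

x = np.linspace(-1, 1, 1001)
err = np.max(np.abs(p(x) - np.cos(n * np.arccos(x))))
print(err)   # far from machine precision, despite exact roots
```

The roots are fine; it is the conversion to the 1, x, x**2, ... basis and the subsequent Horner evaluation that destroy the accuracy.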
The new Chebyshev class has a domain [beg, end] that defines the translation and scaling. The polynomial class needs one also. In fact, I would like to define a Polynomial class to replace poly1d.
> Too general? Not general enough (barycentric representations,
> representation in terms of roots)? I think it could be done without
> too much difficulty.
The difficulty is somewhere in between. I've been thinking of a base series class that would handle the domains and access to the coefficients, as well as addition/subtraction, scalar multiplication/division, and pickling. Derived classes could then overload division/multiplication/integration/differentiation. Roots would come directly from the recursion relation. I don't know if the stability of the Clenshaw recursion (essentially synthetic division with the value as the remainder) is guaranteed for the polynomials/domains of interest, but we could make a list and look into it.

Chuck
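One possible shape for the base class Chuck describes (a sketch under assumed names - SeriesBase, Chebyshev, and the domain/window mapping are illustrative, not the eventual numpy API):

```python
import numpy as np

class SeriesBase:
    """Holds coefficients plus a domain, and maps the domain onto the
    basis's natural window [-1, 1]; handles the shared operations."""
    window = (-1.0, 1.0)

    def __init__(self, coef, domain=(-1.0, 1.0)):
        self.coef = np.atleast_1d(np.asarray(coef, dtype=float))
        self.domain = tuple(domain)

    def _map(self, x):
        lo, hi = self.domain
        wlo, whi = self.window
        return wlo + (whi - wlo) * (np.asarray(x, dtype=float) - lo) / (hi - lo)

    def __add__(self, other):
        if self.domain != other.domain:
            raise ValueError("domains must match")
        n = max(len(self.coef), len(other.coef))
        c = np.zeros(n)
        c[:len(self.coef)] += self.coef
        c[:len(other.coef)] += other.coef
        return type(self)(c, self.domain)

    def __call__(self, x):
        return self._eval(self._map(x))

    def _eval(self, t):
        raise NotImplementedError   # overloaded per basis

class Chebyshev(SeriesBase):
    def _eval(self, t):
        # Clenshaw recursion for sum c[k] * T_k(t)
        b1 = b2 = 0.0
        for ck in self.coef[:0:-1]:
            b1, b2 = ck + 2.0 * t * b1 - b2, b1
        return self.coef[0] + t * b1 - b2
```

Multiplication, integration, and differentiation would live in the derived classes, since their coefficient rules differ per basis.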
Barycentric representation using the type II Chebyshev points would be the way to go for *all* the functions without singularities at the ends, except I couldn't figure out how to get the remainder during division, and the remainder is needed for converting to the coefficients of the various types of series -- it's just like converting a decimal representation to binary by repeated division by two and keeping the remainders. That doesn't mean it can't be done, however... Chuck
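The barycentric formula Chuck mentions is compact at the type II Chebyshev points, where the weights reduce to (-1)^j with the endpoint weights halved (Salzer's weights). A sketch of evaluation only - the division/remainder question he raises is left open:

```python
import numpy as np

def barycentric_cheb2(f, n):
    """Interpolate f at the n+1 type II Chebyshev points cos(pi*j/n)
    and return an evaluator using the barycentric formula."""
    j = np.arange(n + 1)
    xj = np.cos(np.pi * j / n)
    fj = f(xj)
    w = (-1.0) ** j                 # Salzer's weights for these points
    w[0] *= 0.5
    w[-1] *= 0.5

    def p(x):
        x = np.asarray(x, dtype=float)
        d = x[..., None] - xj       # differences to every node
        hit = d == 0.0              # exact node hits handled separately
        d[hit] = 1.0
        num = (w * fj / d).sum(axis=-1)
        den = (w / d).sum(axis=-1)
        out = num / den
        exact = hit.any(axis=-1)
        out[exact] = (hit * fj).sum(axis=-1)[exact]
        return out

    return p
```

For smooth functions this converges rapidly and evaluates stably, which is why it is attractive as an internal representation for everything without endpoint singularities.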
participants (5)
- Anne Archibald
- Arkapravo Bhaumik
- Charles R Harris
- David Cournapeau
- David Goldsmith