I am trying out the integrate.quadrature function on the function f(x)=e**x to the left of the y-axis. If the lower bound is not too negative, I get a reasonable answer, but if the lower bound is too negative, I get essentially 0.0 as the value of the integral. Here is the code:

from scipy import *

def f(x):
    return e**x

integrate.quadrature(f,-10.0,0.0)  # answer is (0.999954600065, 3.14148596026e-010)

but

integrate.quadrature(f,-1000.0,0.0)  # yields (8.35116510531e-090, 8.35116510531e-090)

Note that 'val' and 'err' are equal. Is this a bug in quadrature?

Thanks.

Dan
2008/9/14 Dan Murphy <chiefmurph@comcast.net>:
I am trying out the integrate.quadrature function on the function f(x)=e**x to the left of the y-axis. If the lower bound is not too negative, I get a reasonable answer, but if the lower bound is too negative, I get essentially 0.0 as the value of the integral. Here is the code:
from scipy import *
def f(x):
    return e**x
integrate.quadrature(f,-10.0,0.0) # answer is (0.999954600065, 3.14148596026e-010)
but
integrate.quadrature(f,-1000.0,0.0) # yields (8.35116510531e-090, 8.35116510531e-090)
Note that 'val' and 'err' are equal. Is this a bug in quadrature?
No, unfortunately. It is a limitation of numerical quadrature in general. Specifically, no matter how adaptive the algorithm is, it can only base its result on a finite number of sampled points of the function. If these points are all zero to numerical accuracy, then the answer must be zero. So if you imagine those samples are taken at the midpoints of 10 intervals evenly spaced between -1000 and 0, then the rightmost one returns a value of e**(-50), which is as close to zero as makes no nevermind. You might be all right if this were an adaptive scheme and if it used the endpoints, since one endpoint is guaranteed to give you one. But not using the endpoints is a design feature of some numerical integration schemes.

The take-home lesson is that you can't just use numerical quadrature systems blindly; you have to know the features and limitations of the particular one you're using. Gaussian quadrature can be very accurate for smooth functions, but it has a very specific domain of applicability. scipy.optimize.quad is a little more general-purpose by intent (and necessarily a little less efficient when Gaussian quadrature will do) but it can be tricked too.

A more specific take-home lesson is to try to normalize your problem as much as possible, so that all quantities you feed your integrator are of order unity. Yes, it's a pain to have to handle scale factors yourself, particularly in the normal case when you're solving a family of related problems. But you'll get much more reliable performance.

Anne
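For illustration, here is a minimal sketch of the sampling argument above. It uses the hypothetical layout Anne describes (midpoints of 10 intervals evenly spaced between -1000 and 0), not scipy's actual Gauss-Legendre node placement:

import numpy as np

# Midpoints of 10 equal intervals on [-1000, 0] (the hypothetical layout from
# the explanation above, not scipy's real node placement).
a, b = -1000.0, 0.0
midpoints = np.linspace(a, b, 10, endpoint=False) + (b - a) / 20.0

print(midpoints[-1])          # rightmost sample is -50.0
print(np.exp(midpoints[-1]))  # exp(-50) ~ 2e-22: zero for all practical purposes

# A midpoint-rule estimate built from these samples is therefore essentially
# zero, even though the true integral is ~1:
print((b - a) / 10.0 * np.exp(midpoints).sum())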
Thanks for the clear explanation, Anne. If I understand the second take-home lesson -- feed integrators unity-order quantities -- then instead of calling quadrature as I did:

integrate.quadrature(f,-1000.0,0.0)

I should scale my function f so that my call looks more like

integrate.quadrature(f,-1.0,0.0)

Was that what you meant?

Also, when you say "scipy.optimize.quad is a little more general-purpose" did you mean "scipy.integrate.quad"? Indeed, the call integrate.quad(f,-1000.0,0.0) worked great!

Thanks, and sorry for the late followup.

Dan Murphy
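For reference, a minimal self-contained version of the call reported to work, written with an explicit numpy import instead of the original "from scipy import *":

import numpy as np
from scipy import integrate

def f(x):
    return np.exp(x)

# The adaptive quad routine on the original, unscaled interval; the exact
# value is 1 - e**(-1000), i.e. 1.0 to machine precision.
print(integrate.quad(f, -1000.0, 0.0))   # (value ~1.0, small error estimate)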
2008/10/6 Dan Murphy <chiefmurph@hotmail.com>:
Thanks for the clear explanation, Anne. If I understand the second take-home lesson -- feed integrators unity-order quantities -- then instead of calling quadrature as I did, integrate.quadrature(f,-1000.0,0.0), I should scale my function f so that my call looks more like integrate.quadrature(f,-1.0,0.0). Was that what you meant?
Yes, though with the particular function you used - exp(x) - you then have the problem of exceedingly rapid changes within the region of integration. If what you actually wanted was to integrate from -infinity to zero, then you might do better to rescale your x coordinate nonlinearly, say by x' = 1/(1-x), so that the integration becomes one from zero to one without a sharp edge near one side.

A further take-home lesson is that Gaussian quadrature is extremely efficient for functions that behave like high-order polynomials - which exp(x) does on small intervals but not on large intervals. However, you can use customized Gaussian quadrature based on a different set of polynomials to integrate functions that look like a polynomial times a weight factor.
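To make both suggestions concrete, here is a small sketch; the variable names and the Gauss-Laguerre choice are illustrative additions, not part of the original mail. The first part spells out the substitution u = 1/(1-x); the second treats exp(x) on (-inf, 0] as a polynomial (the constant 1) times the weight e**(-t), which is what Gauss-Laguerre quadrature is built for:

import numpy as np
from scipy import integrate

# (1) Nonlinear rescaling: with u = 1/(1 - x) we have x = 1 - 1/u and
#     dx = du/u**2, so
#       int_{-inf}^{0} e**x dx  =  int_0^1 exp(1 - 1/u) / u**2 du,
#     an order-unity integrand on a unit interval.
def g(u):
    return np.exp(1.0 - 1.0 / u) / u**2

print(integrate.quad(g, 0.0, 1.0))   # exact value is 1.0

# (2) Polynomial times a weight factor: with t = -x the integral becomes
#     int_0^inf e**(-t) * 1 dt.  Gauss-Laguerre quadrature (weight e**(-t))
#     integrates that exactly, since the polynomial part is just the constant 1.
nodes, weights = np.polynomial.laguerre.laggauss(5)
print(weights.sum())                 # 1.0 up to rounding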
Also, when you say "scipy.optimize.quad is a little more general-purpose" did you mean "scipy.integrate.quad"? Indeed, the call integrate.quad(f,-1000.0,0.0) worked great!
Oops. Yes. It's still possible to trick it, but scipy.integrate.quad is based on old, well-tested code that is designed to be fairly robust against peculiar functions. It will be a little slower than Gaussian quadrature for functions that are extremely smooth, but then Gaussian quadrature suffers very badly when handed functions like abs(x).

Anne
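A quick illustration of that last point (my own example, using int_{-1}^{2} |x| dx = 2.5 as the target): the kink at x = 0 is no problem for the adaptive QUADPACK routine behind scipy.integrate.quad, while Gauss-Legendre quadrature converges so slowly on it that scipy.integrate.quadrature typically stops at its iteration limit with an accuracy warning:

import numpy as np
from scipy import integrate

# Adaptive QUADPACK routine: subdivides around the kink at x = 0.
print(integrate.quad(np.abs, -1.0, 2.0))        # ~ (2.5, tiny error estimate)

# Gauss-Legendre quadrature: converges only slowly because of the kink; with
# default tolerances it usually hits maxiter and emits an accuracy warning.
print(integrate.quadrature(np.abs, -1.0, 2.0))  # close to 2.5, but far less accurate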