[Tutor] hi
Oscar Benjamin
oscar.j.benjamin at gmail.com
Sat Aug 10 17:10:09 CEST 2013
On 10 August 2013 07:14, Vick <vick1975 at orange.mu> wrote:
>> From: Oscar Benjamin [mailto:oscar.j.benjamin at gmail.com]
>
>> But for what problem? Different ODE solvers are better at solving
>> different types of ODEs, or even at finding different solutions of
>> the same ODEs. Do you mean the best solver for non-smooth problems,
>> for stiff problems, for getting hyper-accurate solutions to
>> super-smooth problems, etc.?
>> Even the simplest solvers can usually be made as accurate as desired
>> by reducing the step-size and increasing the precision. When people
>> talk about one solver as being more accurate than another they
>> usually mean that it is better accuracy for a given precision (and a
>> particular ODE, initial condition etc.) or that it achieves a better
>> accuracy given the same amount of computation time. The question
>> you've asked is ill-posed as we can easily construct a solver to any
>> arbitrary degree of accuracy (for well-behaved ODEs).
>
> Yes I understand, but as you hinted above, any solver can reach any
> desired accuracy just by reducing its step-size. However, by doing so,
> a low-order solver will surely tax your computer memory and it will
> only reach a solution after a long time, whereas a high-order solver
> will find a solution with more accuracy, in less time and using less
> memory.
Why would it use more memory? You don't need to store every
intermediate step in your computation (see the sample code below).
How long it takes depends on many different things: the language, the
precision, how good a programmer you are. Many of the "this integrator
is better than that one" arguments are implicitly tied to certain
presumptions about how your code gets implemented.
> I am particularly referring to solvers for 1st order ODEs, for general ODEs,
This is the type that I normally work with.
> quadratic problems
What do you mean by that?
> and linear constant coefficient differential
> equations.
Can you not just solve these analytically?
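For example, linear constant coefficient equations (and many others)
can be done symbolically with sympy's dsolve. A quick sketch using the
system you posted (the variable names here are just for illustration):

from sympy import Eq, Function, Rational, dsolve, exp, symbols

t = symbols('t')
x = Function('x')
# dx/dt = 4*exp(0.8*t) - 0.5*x
ode = Eq(x(t).diff(t), 4*exp(Rational(4, 5)*t) - Rational(1, 2)*x(t))
print(dsolve(ode, x(t)))
# general solution: x(t) = C1*exp(-t/2) + (40/13)*exp(4*t/5);
# the initial condition x(0) = 2 used below gives C1 = -14/13.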
> (obviously non-stiff equations).
Higher-order doesn't always mean higher accuracy. The order really
refers to the order of the local truncation error. Whether the
integrator achieves the stated global truncation error depends on
certain assumptions about how the local error propagates through to the
global error. This is why even very high-order methods often fail for
stiff equations if the error propagation is unstable, so that small
errors increase exponentially. Similarly, in chaotic systems, where you
try to follow a trajectory with a positive Lyapunov exponent, small
errors increase exponentially no matter what integrator you use, so
that before long your solution is no better than a guess.
If you're not bothered about stiff problems and are only interested in
ultra-high accuracy for well-behaved systems then consider using a
very high-order Adams-Moulton or Adams-Bashforth integrator with a
high-precision numeric type. You can compute the coefficients for AM
or AB integrators of arbitrary order using the fractions module (see
Wikipedia for the formulas). These are more complicated to implement
as you need a secondary algorithm to bootstrap the process.
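For example, something along these lines computes the Adams-Bashforth
coefficients exactly using the fractions module (just a sketch of the
formula given on the Wikipedia page; the function name is arbitrary):

from fractions import Fraction
from math import factorial

def adams_bashforth_coeffs(s):
    # Returns [b_{s-1}, ..., b_0] for the s-step Adams-Bashforth method
    #   x_{n+s} = x_{n+s-1} + dt*(b_{s-1}*f_{n+s-1} + ... + b_0*f_n)
    coeffs = []
    for j in range(s):
        # Expand the polynomial prod_{i != j} (u + i) with exact
        # coefficients; poly[k] is the coefficient of u**k.
        poly = [Fraction(1)]
        for i in range(s):
            if i == j:
                continue
            new = [Fraction(0)] * (len(poly) + 1)
            for k, c in enumerate(poly):
                new[k] += c * i
                new[k + 1] += c
            poly = new
        # Integrate the polynomial over [0, 1] term by term.
        integral = sum(c / (k + 1) for k, c in enumerate(poly))
        coeffs.append((-1)**j * integral
                      / (factorial(j) * factorial(s - 1 - j)))
    return coeffs

print(adams_bashforth_coeffs(4))  # 55/24, -59/24, 37/24, -3/8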
> I'm guessing you have solvers for stiff problems as well?
I've toyed with these but a proper application hasn't yet come up for
me so I haven't needed more than what scipy has. (scipy has
integrators specifically for stiff problems).
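For example, the 'vode' integrator can be run in BDF mode, which is
intended for stiff problems. A quick sketch on an artificial stiff test
problem (x' = -1000*(x - sin(t)), chosen purely for illustration):

from math import sin
from scipy.integrate import ode

def f(t, x):  # note: ode() wants f(t, x), the reverse order from odeint
    return -1000.0 * (x - sin(t))

solver = ode(f).set_integrator('vode', method='bdf')
solver.set_initial_value(0.0, 0.0)
for k in range(1, 11):
    solver.integrate(0.1 * k)  # advance the output time in steps of 0.1
    print(solver.t, solver.y[0])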
> Do you have a symplectic integrator?
No.
Here is an example that shows we can achieve the precision of scipy's
integrator (for the system you posted) easily using a standard 4th
order Runge-Kutta:
#!/usr/bin/env python

from math import exp
from pylab import loglog, plot, show, legend, xlabel, ylabel

# System specification
def f(x, t):
    return 4 * exp(0.8 * t) - 0.5 * x

def analytic(t):
    return (40/13.)*exp(0.8*t) - (14/13.)*exp(-0.5*t)

# This is for the analytic solution to be true
x0 = 2

def euler(f, x, t, dt, carry=0):
    return x + dt * f(x, t), carry

def euler_comp(f, x, t, dt, carry=0):
    dx = dt * f(x, t)
    newx = x + (dx + carry)
    carry = newx - (x + dx)
    return newx, carry

def rk4(f, x, t, dt, carry=0):
    k1 = dt * f(x, t)
    k2 = dt * f(x + k1/2, t + dt/2)
    k3 = dt * f(x + k2/2, t + dt/2)
    k4 = dt * f(x + k3, t + dt)
    dx = (k1 + 2*k2 + 2*k3 + k4) / 6
    return x + dx, carry

def solveto(stepper, f, x1, t1, t2, dtmax, carry=0):
    t = t1
    x = x1
    while t < t2:
        newt = min(t + dtmax, t2)
        dt = newt - t
        x, carry = stepper(f, x, t, dt, carry)
        t = newt
    return x, carry

def solve(stepper, f, x0, ts, dtmax, carry=0):
    xs = [x0]
    xi = x0
    for t1, t2 in zip(ts[:-1], ts[1:]):
        xi, carry = solveto(stepper, f, xi, t1, t2, dtmax, carry)
        xs.append(xi)
    return xs

def solve_and_plot():
    # Use powers of 2 for accurate binary computation
    T = 4
    dtout = 2 ** -5
    dtmax = 2 ** -8
    ts = [n * dtout for n in range(int(T // dtout) + 1)]
    xeuler = solve(euler, f, x0, ts, dtmax)
    xtrue = [analytic(t) for t in ts]
    plot(ts, xeuler, label=r'$x_{est}$')
    plot(ts, xtrue, label=r'$x_{true}$')
    show()

def check_errors():
    methods = [
        (euler, 'euler', 'r-+'),
        (rk4,   'rk4',   'b-+'),
    ]
    T = 4
    xtrue = analytic(T)
    dts = [2**-n for n in range(2, 18)]
    for stepper, name, fmt in methods:
        errors = []
        for dtmax in dts:
            xest, carry = solveto(stepper, f, x0, 0, T, dtmax)
            error = abs((xest - xtrue) / xtrue)
            errors.append(error)
        loglog(dts, errors, fmt, label=name)
    legend()
    xlabel(r'$\Delta t$')
    ylabel(r'$E_{rel}$', rotation=0)
    show()

check_errors()
You'll see that the error gets smaller as we reduce the step-size
until about dt=1e-3, at which point the error is about 1e-14 (as was
the case for scipy's odeint). We can't make the error any smaller
because the truncation error is already smaller than the rounding
error. Using compensated summation in rk4 wouldn't help because the
real problem is the rounding error in f, which comes from the fact that
0.8 cannot be exactly represented in binary floating point and the
exp() function needs to round its output. If we can't compute the
derivative exactly then we can't do anything about the rounding error.
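For example, converting the float 0.8 to Decimal shows the exact value
that actually gets stored:

from decimal import Decimal
print(Decimal(0.8))
# 0.8000000000000000444089209850062616169452667236328125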
We can do better by using the higher precision Decimal type from
Python's decimal module. The output from the script below is:
$ py -3.3 ./oded.py
dt: 1.000000e-02 error: 1.338e-11
dt: 1.000000e-03 error: 1.337e-15
dt: 1.000000e-04 error: 1.337e-19
dt: 1.000000e-05 error: 1.339e-23
We can probably get the error down to, say, 1e-27 by using a step-size
of 1e-6. However it would take an hour or two to run that on my
computer (there's probably not much scope for optimisation since
profiling identifies the exp() function as the culprit).
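If you want to check that yourself, something like the following shows
where the time goes (assuming the script below is saved as oded.py):

$ py -3.3 -m cProfile -s cumulative ./oded.py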
#!/usr/bin/env python

from decimal import Decimal as D

# We don't have transcendental functions for Decimals.
_E = +D('2.71828182845904523536028747135266249775724709369995')

def exp(y):
    return _E ** y

# System specification
def f(x, t):
    return 4 * exp(D('0.8') * t) - D('0.5') * x

def analytic(t):
    return ((D('40')/D('13'))*exp(D('0.8')*t)
            - (D('14')/D('13'))*exp(-D('0.5')*t))

# This is for the analytic solution to be true
x0 = 2

def rk4(f, x, t, dt, carry=0):
    k1 = dt * f(x, t)
    k2 = dt * f(x + k1/2, t + dt/2)
    k3 = dt * f(x + k2/2, t + dt/2)
    k4 = dt * f(x + k3, t + dt)
    dx = (k1 + 2*k2 + 2*k3 + k4) / 6
    return x + dx, carry

def solveto(stepper, f, x1, t1, t2, dtmax, carry=0):
    t = t1
    x = x1
    while t < t2:
        newt = min(t + dtmax, t2)
        dt = newt - t
        x, carry = stepper(f, x, t, dt, carry)
        t = newt
    return x, carry

T = 4
xtrue = analytic(T)

# It's good to use powers of 10 when working in decimal
dts = [D('10')**-n for n in range(2, 6)]
for dt in dts:
    xest, carry = solveto(rk4, f, x0, 0, T, dt)
    error = abs((xest - xtrue) / xtrue)
    print('dt: %e error: %.3e' % (dt, error))
Oscar