[Tutor] hi

Oscar Benjamin oscar.j.benjamin at gmail.com
Mon Aug 12 14:08:06 CEST 2013

On 11 August 2013 06:09, Vick <vick1975 at orange.mu> wrote:
>> From: Oscar Benjamin [mailto:oscar.j.benjamin at gmail.com]
>> How long a time it takes depends on many different things: the language,
>> the precision, how good a programmer you are. Much of the "this
>> integrator is better than that one" arguments are implicitly tied to
>> certain presumptions about how your code gets implemented.
> [Vick] Well, if you coded the methods in the same format, structure and
> language, surely the higher order method will produce better accuracy for
> the same ODE problem and for the same amount of time than the lower order.

Maybe. It still depends. If you use a higher-order method where it's
just not needed you can end up wasting a lot of CPU cycles. The same
goes for using multi-precision arithmetic when you could happily work
in double precision.
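To give a rough feel for the multi-precision point, here's a quick timing sketch comparing the same arithmetic in hardware doubles and in decimal.Decimal (the exact ratio will vary by machine and Python version, so treat the numbers as illustrative only):

```python
import timeit

# Same expression evaluated with hardware floats...
t_float = timeit.timeit('x * y + x', setup='x = 1.1; y = 2.2',
                        number=100000)

# ...and with software decimal floating point.
t_decimal = timeit.timeit(
    'x * y + x',
    setup="from decimal import Decimal; x = Decimal('1.1'); y = Decimal('2.2')",
    number=100000,
)

print('float:   %.4f s' % t_float)
print('Decimal: %.4f s' % t_decimal)
```

On any CPython I've tried, the Decimal version is noticeably slower, which is the cost you pay for precision you may not need.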

>> Here is an example that shows we can achieve the precision of scipy's
>> integrator (for the system you posted) easily using a standard 4th order
>> Runge-Kutta:
> [Vick] Your code shows just that. I ran it and it went on for like about 40
> minutes.

Which script are you referring to? The first deliberately uses
unnecessarily small step-sizes to illustrate the limits of the rk4
algorithm in double precision. The second is slow but achieves
absurdly high levels of accuracy. It gives better accuracy than I got
with your rungkdp8 (see below); note also that it runs a lot faster in
Python 3.3, since the decimal module was made much faster in 3.3.
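For reference, a minimal double-precision rk4 stepper of the kind used in those scripts looks something like this. The right-hand side f here is reconstructed from the analytic solution quoted later in the thread, so treat it as an assumption rather than your exact code:

```python
from math import exp

def f(t, x):
    # Assumed RHS, consistent with the analytic solution
    # x(t) = (4/1.3)*exp(0.8*t) - (1.4/1.3)*exp(-0.5*t), x(0) = 2.
    return 4 * exp(0.8 * t) - 0.5 * x

def rk4_step(f, t, x, dt):
    # Classical 4th-order Runge-Kutta: four evaluations of f per step.
    k1 = f(t, x)
    k2 = f(t + dt/2, x + dt*k1/2)
    k3 = f(t + dt/2, x + dt*k2/2)
    k4 = f(t + dt, x + dt*k3)
    return x + (dt/6) * (k1 + 2*k2 + 2*k3 + k4)

def solve(f, x0, t1, t2, dt):
    t, x = t1, x0
    while t < t2:
        dt_ = min(dt, t2 - t)      # don't overshoot the endpoint
        x = rk4_step(f, t, x, dt_)
        t += dt_
    return x
```

With solve(f, 2.0, 0.0, 4.0, 2**-3) this reproduces a relative error around 3e-7 for the test problem, in line with the figures quoted in this thread.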

> In that amount of time, the memory was taxed a lot.

I don't think you really know what you're talking about on this
subject. The algorithm works with a constant, very small amount of
memory regardless of step-size. In fact the rungkdp8 method uses more
memory; however, the difference is entirely insignificant for such a
low-dimensional system. I think you're confusing this with the fact
that, in the way *you* have implemented the algorithm, you store every
intermediate step of the computation. My code only stores the desired
output and the current value of the intermediate results.
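The distinction can be sketched like this (the helper names are made up for the sketch): one solver keeps only the current state, the other appends every intermediate value to a list, so its memory grows with the number of steps:

```python
def solve_final(step, x0, n):
    # O(1) memory: only the current value is kept, however many steps we take.
    x = x0
    for i in range(n):
        x = step(i, x)
    return x

def solve_history(step, x0, n):
    # O(n) memory: every intermediate value is stored.
    xs = [x0]
    for i in range(n):
        xs.append(step(i, xs[-1]))
    return xs

# Toy stepper, just to show both give the same final answer.
step = lambda i, x: 0.5 * x + 1
print(solve_final(step, 2.0, 1000))
print(solve_history(step, 2.0, 1000)[-1])
```

Halving the step size doubles the list in the second version but changes nothing about the memory used by the first.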

> For the same
> ODE problem, RK4 should produce an error of 1e-4 in a reasonable amount of
> time (about 2 secs) and utilizing normal computer memory for its
> calculations.

If you modify the check_errors() function in the first script I posted
so that it looks like this:

def check_errors():
    T = 4
    xtrue = analytic(T)
    dt = 2 ** -3
    xest, carry = solveto(rk4, f, x0, 0, T, dt)
    error = abs((xest - xtrue) / xtrue)
    print('xest : %.5e  error : %.2e' % (xest, error))

Then you can run it and get the following:

$ time ./ode.py
xest : 7.53390e+01  error : 3.28e-07

real    0m0.094s
user    0m0.030s
sys     0m0.046s

In other words it takes 90 milliseconds to run the script and get an
error less than 1e-6 (or 1e-4% if you prefer). Actually it really just
takes 90 milliseconds to *initialise* the script. We can use timeit to
check how long the actual computation takes (with the print line
commented out):

$ python -m timeit -s 'from ode import check_errors' 'check_errors()'
10000 loops, best of 3: 168 usec per loop

It takes 168 microseconds to achieve the accuracy you requested, for
your test problem, using my rk4 integrator. Scipy's odeint is
probably faster (I'll let you do the timing if you want to know but
bear in mind that it takes a significant amount of time to *import*
scipy so you should use timeit for benchmarking).

> As I mentioned before I used this problem as a test for
> accuracy to compare different methods. I use percentage change (x 100) to
> derive the error for each method.

I don't understand why you insist on defying convention by using
percentages. Really your errors should be so small that percent is a
meaningless unit of measure. Fractional errors are useful because you
can easily relate them to absolute errors without the pointless
cognitive burden of a factor of 100 getting in the way.
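Concretely, relating a fractional error back to an absolute error is a single multiplication, with no factor of 100 to carry around (the numbers below are just the ones from the run above):

```python
xtrue = 75.3390        # true value at t = 4
rel_error = 3.28e-07   # fractional (dimensionless) relative error

abs_error = rel_error * abs(xtrue)   # absolute error, same units as x
pct_error = 100 * rel_error          # the same quantity as a percentage

print(abs_error)
print(pct_error)
```

The fractional form composes directly with the value; the percentage form is the same number with an extra bookkeeping step.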

> RK4 in this instance scores about 1e-4
> accuracy. With this problem I've tested RKF5, DPRK45, some 6 and 7 order RK
> methods and the last one is the DOPRI8(7)13.

Are you testing all the methods with the same step-sizes? If so that's
not a fair comparison since they do different amounts of work within
each step.
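One way to make the comparison fair is to count right-hand-side evaluations rather than steps, since that's where the work goes. A sketch with a counting wrapper (euler_step and rk4_step here are generic single-step methods, not your exact code):

```python
def counted(f):
    # Wrap f so we can see how many times each method evaluates it.
    def g(t, x):
        g.calls += 1
        return f(t, x)
    g.calls = 0
    return g

def euler_step(f, t, x, dt):           # 1 evaluation per step
    return x + dt * f(t, x)

def rk4_step(f, t, x, dt):             # 4 evaluations per step
    k1 = f(t, x)
    k2 = f(t + dt/2, x + dt*k1/2)
    k3 = f(t + dt/2, x + dt*k2/2)
    k4 = f(t + dt, x + dt*k3)
    return x + (dt/6) * (k1 + 2*k2 + 2*k3 + k4)

def run(stepper, f, x0, n, dt):
    t, x = 0.0, x0
    for _ in range(n):
        x = stepper(f, t, x, dt)
        t += dt
    return x

f1 = counted(lambda t, x: -x)
f2 = counted(lambda t, x: -x)
run(euler_step, f1, 1.0, 100, 0.01)
run(rk4_step, f2, 1.0, 100, 0.01)
print(f1.calls, f2.calls)   # 100 vs 400: rk4 does 4x the work per step
```

At the same step size rk4 does four times the work of Euler (and a 13-stage DOPRI8 does thirteen times), so equal step sizes are not equal effort.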

> The Leapfrog method is a symplectic integrator and I am currently trying to
> code it. I have it in Excel format already.
> The following is my code for the ODE problem test for RK4 and DOPRI8(7)13:
> (Note that they are using the same language, structure and format) (you will
> also need mpmath to run: I use mpmath for large number calculations)

You don't need mpmath or DOPRI8 for the error-level you said you
wanted. Scipy's odeint will do just fine. When you're old, grey and
wise like me you'll value the time that it takes to *write* the code
much more than the time it takes to run (okay, so I'm not that old and
my wisdom is questionable, but I do have some grey hairs). It took me
2-3 minutes to write the script that solves your problem with scipy's
odeint.
> The code should run using less than 2 secs and the output is pre-defined for
> time, y , true value and error. The error for the two methods will show the
> difference between the different orders. You should change the call to each
> method for comparison as I coded it to run one method at any one time.

[snip lots of code]

I would report a comparison of timings between this code and my
decimal rk4, but I think it would be unfair, as there may be something
wrong with the way yours is implemented. Unless I've misunderstood
something, it's not achieving the proper benefits of the mpmath
library, since for some reason the error bottoms out at 1e-18. I
replaced the bottom of your script with this:

x0 = mpf('2')
T = mpf('4')

def analytic(t):
    return (mpf('4')/mpf('1.3'))*exp(mpf('0.8')*(t))-(mpf('1.4')/mpf('1.3'))*exp(mpf('-0.5')*(t))

def solveto(stepper, x1, t1, t2, dt):
    t = t1
    x = x1
    while t < t2:
        newt = min(t + dt, t2)
        dt = newt - t
        x = stepper(t, x, dt)
        t = newt
    return x

xtrue = analytic(T)
dts = [mpf('10') ** -n for n in range(0, 3)]
errors = []
for dt in dts:
    xest = solveto(rungkdp8, x0, 0, T, dt)
    error = abs((xest - xtrue) / xtrue)
    errors.append(error)   # needed below for the loglog plot
    print('dt : %.1e  error : %.2g' % (dt, error))

from pylab import *
loglog(dts, errors)
ylabel(r'$E_{rel}$', rotation=0)
xlabel(r'$\Delta t$')
show()

When I run that I get:

$ ./vick.py
dt : 1.0e+00  error : 1.1e-09
dt : 1.0e-01  error : 9.4e-18
dt : 1.0e-02  error : 2.3e-18
dt : 1.0e-03  error : 2.3e-18

Do you get the same (it could be a difference in the underlying library etc.)?
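One thing worth checking is the working precision: with any multi-precision library the error can never drop below roughly 10**-prec, no matter how small the step size. The standard decimal module shows the same floor (the equivalent knob in mpmath is mp.dps); this sketch uses sqrt(2)**2 - 2, which would be exactly zero in exact arithmetic, so whatever is left over is pure rounding error:

```python
from decimal import Decimal, getcontext

errs = {}
for prec in (15, 30, 50):
    getcontext().prec = prec
    # In exact arithmetic sqrt(2)**2 - 2 == 0; the leftover is rounding
    # error, and it only shrinks when the working precision is raised.
    errs[prec] = abs(Decimal(2).sqrt() ** 2 - 2)
    print(prec, errs[prec])
```

If your script never raises the precision above the default, the error will bottom out at the default precision regardless of how good the integrator is.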

