Numeric root-finding in Python

Eelco hoogendoorn.eelco at gmail.com
Sun Feb 12 05:10:49 EST 2012


On Feb 12, 7:41 am, Steven D'Aprano <steve
+comp.lang.pyt... at pearwood.info> wrote:
> This is only peripherally a Python problem, but in case anyone has any
> good ideas I'm going to ask it.
>
> I have a routine to calculate an approximation of Lambert's W function,
> and then apply a root-finding technique to improve the approximation.
> This mostly works well, but sometimes the root-finder gets stuck in a
> cycle.
>
> Here's my function:
>
> import math
> def improve(x, w, exp=math.exp):
>     """Use Halley's method to improve an estimate of W(x) given
>     an initial estimate w.
>     """
>     try:
>         for i in range(36):  # Max number of iterations.
>             ew = exp(w)
>             a = w*ew - x
>             b = ew*(w + 1)
>             err = -a/b  # Estimate of the error in the current w.
>             if abs(err) <= 1e-16:
>                 break
>             print '%d: w= %r err= %r' % (i, w, err)
>             # Make a better estimate.
>             c = (w + 2)*a/(2*w + 2)
>             delta = a/(b - c)
>             w -= delta
>         else:
>             raise RuntimeError('calculation failed to converge', err)
>     except ZeroDivisionError:
>         assert w == -1
>     return w
>
> Here's an example where improve() converges very quickly:
>
> py> improve(-0.36, -1.222769842388856)
> 0: w= -1.222769842388856 err= -2.9158979924038895e-07
> 1: w= -1.2227701339785069 err= 8.4638038491998997e-16
> -1.222770133978506
>
> That's what I expect: convergence in only a few iterations.
>
> Here's an example where it gets stuck in a cycle, bouncing back and forth
> between two values:
>
> py> improve(-0.36787344117144249, -1.0057222396915309)
> 0: w= -1.0057222396915309 err= 2.6521238905750239e-14
> 1: w= -1.0057222396915044 err= -2.6521238905872001e-14
> 2: w= -1.0057222396915309 err= 2.6521238905750239e-14
> 3: w= -1.0057222396915044 err= -2.6521238905872001e-14
> 4: w= -1.0057222396915309 err= 2.6521238905750239e-14
> 5: w= -1.0057222396915044 err= -2.6521238905872001e-14
> [...]
> 32: w= -1.0057222396915309 err= 2.6521238905750239e-14
> 33: w= -1.0057222396915044 err= -2.6521238905872001e-14
> 34: w= -1.0057222396915309 err= 2.6521238905750239e-14
> 35: w= -1.0057222396915044 err= -2.6521238905872001e-14
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "<stdin>", line 19, in improve
> RuntimeError: ('calculation failed to converge', -2.6521238905872001e-14)
>
> (The correct value for w is approximately -1.00572223991.)
>
> I know that Newton's method is subject to cycles, but I haven't found any
> discussion about Halley's method and cycles, nor do I know what the best
> approach for breaking them would be. None of the papers on calculating
> the Lambert W function that I have found mentions this.
>
> Does anyone have any advice for solving this?
>
> --
> Steven

Looks like a floating point issue to me, rather than something
intrinsic to the iterative algorithm. Surely there is no complex
chaotic behaviour to be found in this fairly smooth function within a
+/- 1e-14 window. On the other hand, there is plenty of loss of
significant bits to be suspected in the kind of operations you are
performing (exp(x) + something, always a tricky one).
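
To make the cancellation concrete, here is a minimal sketch that just
re-evaluates the residual a = w*exp(w) - x at the two w values from the
traceback above (the exact digits printed will vary slightly by platform):

    # Sketch: how many significant digits survive the subtraction
    # w*exp(w) - x near the root? (w values taken from the cycle above.)
    import math

    x = -0.36787344117144249
    for w in (-1.0057222396915309, -1.0057222396915044):
        ew = math.exp(w)
        prod = w * ew          # very close to x, so ...
        a = prod - x           # ... the subtraction cancels most leading digits
        print('w = %.17g' % w)
        print('  w*exp(w) = %.17g' % prod)
        print('  x        = %.17g' % x)
        print('  residual = %.3g' % a)

w*exp(w) and x agree to roughly one ulp, so the computed residual is
essentially rounding noise; dividing by the small b = exp(w)*(w + 1)
(about -0.002 here, since w is near -1) amplifies that noise to the
~2.7e-14 oscillation you see, and an absolute stopping tolerance of
1e-16 can then never be met in double precision.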

I would start by asking: how accurate is good enough? If it's not good
enough, play around with the ordering of your operations, try solving a
transformed problem that is less sensitive to loss of significance, and
begin by trying different numeric types to see whether the problem is
sensitive to precision in the first place.
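
For that last point, here is a sketch of the same Halley iteration rerun
under a different numeric type, using mpmath's arbitrary-precision floats
(a third-party package; the 50 digits and 1e-40 tolerance below are
arbitrary choices for illustration, not recommendations):

    # Sketch: rerun the Halley iteration with mpmath's arbitrary-precision
    # floats to see whether the cycling is a double-precision artefact.
    from mpmath import mp, mpf, exp

    mp.dps = 50  # carry 50 significant decimal digits

    def improve_mp(x, w, tol=mpf('1e-40'), maxiter=36):
        x, w = mpf(x), mpf(w)
        for i in range(maxiter):
            ew = exp(w)
            a = w*ew - x            # residual of w*exp(w) = x
            b = ew*(w + 1)          # derivative term
            err = -a/b
            if abs(err) <= tol:
                return w
            c = (w + 2)*a/(2*w + 2)
            w -= a/(b - c)          # Halley step
        raise RuntimeError('failed to converge', err)

    print(improve_mp('-0.36787344117144249', '-1.0057222396915309'))

If the cycle disappears with the extra precision, the algorithm itself is
fine, and the fix is either more precision, a rearrangement of the residual
computation, or simply a less demanding stopping criterion (a relative
tolerance, or stopping once abs(err) no longer decreases).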


