[SciPy-Dev] Comments on optimize.newton function
Gökhan Sever
gokhansever at gmail.com
Sun May 22 01:55:02 EDT 2011
On Sat, May 21, 2011 at 7:50 PM, Charles R Harris
<charlesr.harris at gmail.com> wrote:
> Yeah, that left me thinking that we could really use a bracketing function.
> The brent optimizer must have something like that it uses to bracket
> minimums and perhaps that could be adapted. I know there are bracketing
> methods out there, IIRC there is one in NR.
I have managed to get those bracketed solvers working after plotting
my function. brenth seems to converge the fastest, but providing
the search interval is still a problem for me, so I will probably skip
these solvers.
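For context, a bracketing routine of the kind mentioned above can be quite simple. Here is a minimal sketch in plain Python (independent of SciPy; the geometric growth factor and iteration cap are my own arbitrary choices, not anything from the SciPy source):

```python
import math

def bracket_root(f, x0, step=1e-3, grow=2.0, maxiter=50):
    """Expand an interval around x0 until f changes sign across it."""
    a, b = x0 - step, x0 + step
    fa, fb = f(a), f(b)
    for _ in range(maxiter):
        if fa * fb < 0:           # sign change: a root lies in [a, b]
            return a, b
        step *= grow              # widen the interval geometrically
        a, b = x0 - step, x0 + step
        fa, fb = f(a), f(b)
    raise RuntimeError("no sign change found; cannot bracket a root")

# Example: bracket the root of cos(x) near x0 = 1 (true root at pi/2)
a, b = bracket_root(math.cos, 1.0)
```

The resulting (a, b) pair could then be fed straight to brentq or brenth.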
>
> IIRC, the old version blew up when the root was at zero, the problem was
> posted on the list.
>
> I believe there is only one newton method in scipy, but we moved it into
> the zeros module and deprecated the version at the old location. It has
> since been removed.
Yes, you are right. In the current source repository, there is only
one newton in scipy.optimize, and it resides in zeros.py.
>
> I would just stick to the secant method in the general case unless the
> derivative is easy to come by. Note that newton is one of the 'original'
> functions in scipy, the other 1d zero finders came later. So if you can make
> an improved cythonized version I don't see any reason not to use it. If you
> do make cythonized versions it might be worth implementing the Newton and
> secant parts separately and make the current newton a driver function.
I have figured out the derivative option and tested fsolve and newton
with the fprime arg provided. They are still slower compared to the secant
method, most likely because the derivative function requires more
computation to evaluate. As you can see, the function is:
cpdef double petters_solve_for_rw(double x, double rd, double rh):
    return rh - exp(kelvin/x) * (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - kappa))
but the derivative is quite complex compared to the function:
cpdef double pprime(double x, double rd, double rh):
    return -3*(rd**3 - x**3)*x**2*exp(kelvin/x)/((kappa - 1.0)*rd**3 + x**3)**2 \
           - 3*x**2*exp(kelvin/x)/((kappa - 1.0)*rd**3 + x**3) \
           - (rd**3 - x**3)*kelvin*exp(kelvin/x)/(((kappa - 1.0)*rd**3 + x**3)*x**2)
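A quick way to gain confidence in a hand-derived fprime like this is to compare it against a central finite difference. The sketch below does that in plain Python; note that kelvin and kappa are module-level constants in the original Cython code, and the numerical values used here are purely illustrative placeholders, as is the sample evaluation point:

```python
from math import exp

# Placeholder values: kelvin and kappa are module-level constants in the
# original Cython module; these numbers are illustrative only.
kelvin = 1.0e-9   # hypothetical Kelvin-term coefficient
kappa = 0.6       # hypothetical hygroscopicity parameter

def petters_solve_for_rw(x, rd, rh):
    return rh - exp(kelvin/x) * (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - kappa))

def pprime(x, rd, rh):
    # Analytic derivative of petters_solve_for_rw with respect to x
    return (-3*(rd**3 - x**3)*x**2*exp(kelvin/x)/((kappa - 1.0)*rd**3 + x**3)**2
            - 3*x**2*exp(kelvin/x)/((kappa - 1.0)*rd**3 + x**3)
            - (rd**3 - x**3)*kelvin*exp(kelvin/x)/(((kappa - 1.0)*rd**3 + x**3)*x**2))

# Central difference at an arbitrary sample point (wet radius > dry radius)
x, rd, rh = 2e-7, 1e-7, 0.95
h = 1e-13
fd = (petters_solve_for_rw(x + h, rd, rh)
      - petters_solve_for_rw(x - h, rd, rh)) / (2*h)
```

If the analytic form is right, fd and pprime(x, rd, rh) agree to several digits.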
Skipping the fprime option, I focus on the secant method, which is
the fastest and quite robust for my case. Below is the latest
version of the cythonized secant solver (probably I should update the name)
that I use:
cpdef double cnewton(func, double x0, args=(), double tol=1e-10, int maxiter=50):
    # Secant method
    p0 = x0
    p1 = x0*(1 + 1e-10) + 1e-10
    #p1 = x0*(1+1e-4)
    q0 = func(*((p0,) + args))
    q1 = func(*((p1,) + args))
    for iter in range(maxiter):
        p = p1 - q1*(p1 - p0)/(q1 - q0)
        if abs(p - p1) < tol:
            return p
        p0 = p1
        q0 = q1
        p1 = p
        q1 = func(*((p1,) + args))
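The same logic can be checked without compiling Cython. Here is a plain-Python equivalent of the routine above (the test function x**2 - 2 and the RuntimeError on non-convergence are my additions for illustration):

```python
def secant(func, x0, args=(), tol=1e-10, maxiter=50):
    """Secant method: same update rule as the cnewton routine above."""
    p0 = x0
    p1 = x0*(1 + 1e-10) + 1e-10   # perturbed second starting point
    q0 = func(*((p0,) + args))
    q1 = func(*((p1,) + args))
    for _ in range(maxiter):
        p = p1 - q1*(p1 - p0)/(q1 - q0)
        if abs(p - p1) < tol:
            return p
        p0, q0 = p1, q1
        p1 = p
        q1 = func(*((p1,) + args))
    raise RuntimeError("secant method did not converge in %d iterations" % maxiter)

root = secant(lambda x: x*x - 2.0, 1.0)   # converges to sqrt(2)
```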
I simplified this block

if x0 >= 0:
    p1 = x0*(1 + 1e-4) + 1e-4
else:
    p1 = x0*(1 + 1e-4) - 1e-4

to just one line, since I am not interested in negative initial points.
In other words, real drops can never have a radius smaller than 0 meters.

p1 = x0*(1 + 1e-10) + 1e-10
#p1 = x0*(1+1e-4)
I also skipped this block:

if q1 == q0:
    if p1 != p0:
        msg = "Tolerance of %s reached" % (p1 - p0)
        warnings.warn(msg, RuntimeWarning)
    return (p1 + p0)/2.0

because this part is almost never executed in my case.
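One caveat worth noting before dropping that guard: the secant update divides by q1 - q0, so if the function ever returns the same value twice in a row, the stripped-down loop raises ZeroDivisionError instead of warning and returning. A minimal illustration (pure Python; the constant function is chosen only to trigger the case):

```python
def secant_no_guard(func, x0, tol=1e-10, maxiter=50):
    # Stripped-down secant update with the q1 == q0 guard removed
    p0, p1 = x0, x0*(1 + 1e-10) + 1e-10
    q0, q1 = func(p0), func(p1)
    for _ in range(maxiter):
        p = p1 - q1*(p1 - p0)/(q1 - q0)   # fails if q1 == q0
        if abs(p - p1) < tol:
            return p
        p0, q0, p1 = p1, q1, p
        q1 = func(p1)

try:
    secant_no_guard(lambda x: 1.0, 1.0)   # flat function: q1 == q0 immediately
    flat_ok = True
except ZeroDivisionError:
    flat_ok = False
```

For a strictly monotone function like the drop-growth equation here this case indeed should not arise, which is presumably why skipping it is safe in practice.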
>
> How many function evaluations are you seeing in the root finding?
>
With this version of cnewton, I see about 9-10 function evaluations
to converge to a meaningful and reasonable root.
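An evaluation count like that is easy to verify with a counting wrapper around the function. In this sketch the target is x**2 - 2 rather than the drop-growth function, so the count is only indicative of the method's behavior, not of the numbers quoted above:

```python
def secant_with_count(f, x0, tol=1e-10, maxiter=50):
    """Secant iteration that also tallies function evaluations."""
    nfev = [0]                     # mutable counter shared with the wrapper
    def func(x):
        nfev[0] += 1
        return f(x)
    p0 = x0
    p1 = x0*(1 + 1e-10) + 1e-10
    q0, q1 = func(p0), func(p1)
    for _ in range(maxiter):
        p = p1 - q1*(p1 - p0)/(q1 - q0)
        if abs(p - p1) < tol:
            return p, nfev[0]
        p0, q0, p1 = p1, q1, p
        q1 = func(p1)
    raise RuntimeError("no convergence")

root, nfev = secant_with_count(lambda x: x*x - 2.0, 1.0)
# nfev is 2 initial evaluations plus one per non-final iteration
```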
> Chuck