[SciPy-User] Is this weird answer a bug in optimize.newton?
David Mikolas
david.mikolas1 at gmail.com
Wed Feb 24 20:15:21 EST 2016
Ah - I see what happened: 365 days with zero votes. It can be accessed and
reopened if signed in:
http://stackoverflow.com/questions/27792180/brentq-gives-wrong-results-and-reports-converged-if-limits-contain-a-nan-how
On Thu, Feb 25, 2016 at 9:05 AM, David Mikolas <david.mikolas1 at gmail.com>
wrote:
> I use scipy.optimize.brentq frequently and so far have had no problems...
> except:
>
> if one of the limits is np.nan (which can happen without you necessarily
> knowing about it), then there is trouble.
>
> I'm having difficulty finding my report. I wrote this on Stack Overflow, but
> for some reason it seems to be missing. Here is a residual copy on a
> "mirror":
>
>
> http://w3foverflow.com/question/brentq-gives-wrong-results-and-reports-converged-if-limits-contain-a-nan-how-to-best-report/
>
> Briefly - it sometimes returns an incorrect/invalid answer but indicates
> convergence=True. It can also run all the way to the iteration limit before
> giving an incorrect/invalid answer.
>
> With the first limit set to np.nan, it returns a wrong answer (0.0) but
> indicates convergence.
>
> With the second limit set to np.nan and disp = False, it runs to the
> iteration limit, then returns nan as the answer and indicates convergence.
>
> With the second limit set to np.nan and disp = True, it runs to the
> iteration limit, then raises.
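>
> Here is a minimal sketch of all three cases. The function and the finite
> limits are mine and purely illustrative; the noted outcomes are the ones
> described above (with the SciPy of early 2016 - exact behaviour may vary
> by version):
>
> import numpy as np
> from scipy.optimize import brentq
>
> f = lambda x: x - 1.0  # any simple function with a bracketable root
>
> # Case 1: nan as the first limit -- wrong answer, but converged=True
> root, r = brentq(f, np.nan, 2.0, full_output=True)
> print(root, r.converged)  # reported behaviour: 0.0 True
>
> # Case 2: nan as the second limit, disp=False -- runs to the iteration
> # limit, returns nan, and still reports converged=True
> root, r = brentq(f, 0.0, np.nan, full_output=True, disp=False)
> print(root, r.converged)  # reported behaviour: nan True
>
> # Case 3: nan as the second limit, disp=True (the default) -- runs to
> # the iteration limit, then raises
> try:
>     brentq(f, 0.0, np.nan, full_output=True)
> except RuntimeError as e:
>     print("raised:", e)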
>
>
> On Thu, Feb 25, 2016 at 8:29 AM, Jason Sachs <jmsachs at gmail.com> wrote:
>
>> Brent's method (scipy.optimize.brentq) is probably the best general
>> scalar (as opposed to multidimensional) root-finding algorithm; it's
>> the scipy equivalent of MATLAB's fzero.
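>>
>> For example, a minimal call on the cubic from this thread (the bracket
>> [-2, 0] is mine; it just has to straddle the root with a sign change):
>>
>> from scipy.optimize import brentq
>>
>> # f(-2) = -5 < 0 and f(0) = 1 > 0, so [-2, 0] brackets the real root
>> root = brentq(lambda x: x**3 - x + 1, -2.0, 0.0)
>> print(root)  # approximately -1.3247179572447458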
>>
>> It's worth reading Numerical Recipes to understand the different
>> root-finding algorithms. The previous (2nd) edition is readable for
>> free online at http://apps.nrbook.com/c/index.html; root-finding
>> starts on p. 347.
>>
>> Chandrupatla's algorithm appears to be "better" than Brent's in the
>> sense that both are robust but Chandrupatla's usually converges faster,
>> by a factor of 2-3x (and if not, it trails Brent's by only an extra
>> iteration or two) -- yet it's not well-known. I have a writeup here:
>> http://www.embeddedrelated.com/showarticle/855.php
>>
>> On Wed, Feb 24, 2016 at 5:21 PM, David Mikolas <david.mikolas1 at gmail.com>
>> wrote:
>> > First of all, it sounds like your students are receiving a very good
>> > educational experience!
>> >
>> > Typing help(optimize.newton) returns the following:
>> >
>> >     fprime : function, optional
>> >         The derivative of the function when available and convenient. If it
>> >         is None (default), then the secant method is used.
>> >
>> > A look at https://en.wikipedia.org/wiki/Secant_method#Convergence turns up
>> > the following:
>> >
>> > "If the initial values are not close enough to the root, then there is
>> no
>> > guarantee that the secant method converges."
>> >
>> > So it looks like the behavior of scipy.optimize.newton in your example is
>> > neither unexpected nor a bug.
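>> >
>> > To see that sensitivity concretely, here is a textbook secant loop that
>> > just prints its iterates (my own sketch; scipy's internal choice of the
>> > second starting point may differ from the 1e-4 used here):
>> >
>> > def secant_iterates(f, x0, x1, n=10):
>> >     # print n steps of the secant method starting from the pair x0, x1
>> >     for i in range(n):
>> >         f0, f1 = f(x0), f(x1)
>> >         if f1 == f0:
>> >             print("flat secant -- stopping")
>> >             return
>> >         x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
>> >         print(i, x1)
>> >
>> > secant_iterates(lambda x: x**3 - x + 1, 0.0, 1e-4)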
>> >
>> >
>> >
>> > On Thu, Feb 25, 2016 at 7:36 AM, Oscar Benjamin <
>> oscar.j.benjamin at gmail.com>
>> > wrote:
>> >>
>> >> On 24 February 2016 at 19:43, Thomas Baruchel <baruchel at gmx.com> wrote:
>> >> > With my students this afternoon, I encountered the following annoying
>> >> > behaviour of Scipy:
>> >> >
>> >> >>>> from scipy import optimize
>> >> >>>> optimize.newton(lambda x: x**3-x+1, 0)
>> >> >
>> >> > 0.999999989980082
>> >> >
>> >> > This is very weird, because the equation is rather simple, with simple
>> >> > coefficients, and the initial point is simple as well.
>> >>
>> >> Somehow the initial guess is problematic:
>> >>
>> >> In [1]: from scipy import optimize
>> >>
>> >> In [2]: optimize.newton(lambda x: x**3 - x + 1, 0)
>> >> Out[2]: 0.999999989980082
>> >>
>> >> In [3]: optimize.newton(lambda x: x**3 - x + 1, 0.1)
>> >> Out[3]: -1.324717957244745
>> >>
>> >> In [4]: optimize.newton(lambda x: x**3 - x + 1, -0.1)
>> >> Out[4]: -1.3247179572447458
>> >>
>> >> The last two give the correct answer:
>> >>
>> >> In [6]: from numpy import roots
>> >>
>> >> In [7]: roots([1, 0, -1, 1])
>> >> Out[7]:
>> >> array([-1.32471796+0.j        ,  0.66235898+0.56227951j,
>> >>         0.66235898-0.56227951j])
>> >>
>> >> The only thing that jumps out to me is that you've picked an initial
>> >> guess at which the function has a zero second derivative. I'm not sure
>> >> why that would cause the method to fail, but Newton solvers are
>> >> sensitive to various singularities in the input function.
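>> >>
>> >> For what it's worth, a quick check of my own confirms the zero second
>> >> derivative at the guess (f''(x) = 6x for this cubic):
>> >>
>> >> In [8]: d2f = lambda x: 6*x  # second derivative of x**3 - x + 1
>> >>
>> >> In [9]: d2f(0)
>> >> Out[9]: 0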
>> >>
>> >> --
>> >> Oscar