[Scipy-svn] r6226 - trunk/scipy/optimize

scipy-svn at scipy.org
Wed Feb 10 02:42:25 EST 2010


Author: stefan
Date: 2010-02-10 01:42:25 -0600 (Wed, 10 Feb 2010)
New Revision: 6226

Modified:
   trunk/scipy/optimize/tnc.py
Log:
DOC: Reformat TNC docstring.

Modified: trunk/scipy/optimize/tnc.py
===================================================================
--- trunk/scipy/optimize/tnc.py	2010-02-10 07:41:40 UTC (rev 6225)
+++ trunk/scipy/optimize/tnc.py	2010-02-10 07:42:25 UTC (rev 6226)
@@ -86,106 +86,86 @@
     """Minimize a function with variables subject to bounds, using
     gradient information.
 
-    :Parameters:
-        func : callable func(x, *args)
-            Function to minimize.  Should return f and g, where f is
-            the value of the function and g its gradient (a list of
-            floats).  If the function returns None, the minimization
-            is aborted.
-        x0 : list of floats
-            Initial estimate of minimum.
-        fprime : callable fprime(x, *args)
-            Gradient of func. If None, then func must return the
-            function value and the gradient (f,g = func(x, *args)).
-        args : tuple
-            Arguments to pass to function.
-        approx_grad : bool
-            If true, approximate the gradient numerically.
-        bounds : list
-            (min, max) pairs for each element in x, defining the
-            bounds on that parameter. Use None or +/-inf for one of
-            min or max when there is no bound in that direction.
-        scale : list of floats
-            Scaling factors to apply to each variable.  If None, the
-            factors are up-low for interval bounded variables and
-            1+|x] fo the others.  Defaults to None
-        offset : float
-            Value to substract from each variable.  If None, the
-            offsets are (up+low)/2 for interval bounded variables
-            and x for the others.
-        messages :
-            Bit mask used to select messages display during
-            minimization values defined in the MSGS dict.  Defaults to
-            MGS_ALL.
-        maxCGit : int
-            Maximum number of hessian*vector evaluations per main
-            iteration.  If maxCGit == 0, the direction chosen is
-            -gradient if maxCGit < 0, maxCGit is set to
-            max(1,min(50,n/2)).  Defaults to -1.
-        maxfun : int
-            Maximum number of function evaluation.  if None, maxfun is
-            set to max(100, 10*len(x0)).  Defaults to None.
-        eta : float
-            Severity of the line search. if < 0 or > 1, set to 0.25.
-            Defaults to -1.
-        stepmx : float
-            Maximum step for the line search.  May be increased during
-            call.  If too small, it will be set to 10.0.  Defaults to 0.
-        accuracy : float
-            Relative precision for finite difference calculations.  If
-            <= machine_precision, set to sqrt(machine_precision).
-            Defaults to 0.
-        fmin : float
-            Minimum function value estimate.  Defaults to 0.
-        ftol : float
-            Precision goal for the value of f in the stoping criterion.
-            If ftol < 0.0, ftol is set to 0.0 defaults to -1.
-        xtol : float
-            Precision goal for the value of x in the stopping
-            criterion (after applying x scaling factors).  If xtol <
-            0.0, xtol is set to sqrt(machine_precision).  Defaults to
-            -1.
-        pgtol : float
-            Precision goal for the value of the projected gradient in
-            the stopping criterion (after applying x scaling factors).
-            If pgtol < 0.0, pgtol is set to 1e-2 * sqrt(accuracy).
-            Setting it to 0.0 is not recommended.  Defaults to -1.
-        rescale : float
-            Scaling factor (in log10) used to trigger f value
-            rescaling.  If 0, rescale at each iteration.  If a large
-            value, never rescale.  If < 0, rescale is set to 1.3.
+    Parameters
+    ----------
+    func : callable func(x, *args)
+        Function to minimize.  Should return f and g, where f is
+        the value of the function and g its gradient (a list of
+        floats).  If the function returns None, the minimization
+        is aborted.
+    x0 : list of floats
+        Initial estimate of minimum.
+    fprime : callable fprime(x, *args)
+        Gradient of func. If None, then func must return the
+        function value and the gradient (f,g = func(x, *args)).
+    args : tuple
+        Arguments to pass to function.
+    approx_grad : bool
+        If true, approximate the gradient numerically.
+    bounds : list
+        (min, max) pairs for each element in x, defining the
+        bounds on that parameter. Use None or +/-inf for one of
+        min or max when there is no bound in that direction.
+    scale : list of floats
+        Scaling factors to apply to each variable.  If None, the
+        factors are up-low for interval bounded variables and
+        1+|x| for the others.  Defaults to None.
+    offset : float
+        Value to subtract from each variable.  If None, the
+        offsets are (up+low)/2 for interval bounded variables
+        and x for the others.
+    messages : int
+        Bit mask used to select messages displayed during
+        minimization; values are defined in the MSGS dict.
+        Defaults to MSG_ALL.
+    maxCGit : int
+        Maximum number of hessian*vector evaluations per main
+        iteration.  If maxCGit == 0, the direction chosen is
+        -gradient.  If maxCGit < 0, maxCGit is set to
+        max(1, min(50, n/2)).  Defaults to -1.
+    maxfun : int
+        Maximum number of function evaluations.  If None, maxfun
+        is set to max(100, 10*len(x0)).  Defaults to None.
+    eta : float
+        Severity of the line search.  If < 0 or > 1, set to 0.25.
+        Defaults to -1.
+    stepmx : float
+        Maximum step for the line search.  May be increased during
+        call.  If too small, it will be set to 10.0.  Defaults to 0.
+    accuracy : float
+        Relative precision for finite difference calculations.  If
+        <= machine_precision, set to sqrt(machine_precision).
+        Defaults to 0.
+    fmin : float
+        Minimum function value estimate.  Defaults to 0.
+    ftol : float
+        Precision goal for the value of f in the stopping criterion.
+        If ftol < 0.0, ftol is set to 0.0.  Defaults to -1.
+    xtol : float
+        Precision goal for the value of x in the stopping
+        criterion (after applying x scaling factors).  If xtol <
+        0.0, xtol is set to sqrt(machine_precision).  Defaults to
+        -1.
+    pgtol : float
+        Precision goal for the value of the projected gradient in
+        the stopping criterion (after applying x scaling factors).
+        If pgtol < 0.0, pgtol is set to 1e-2 * sqrt(accuracy).
+        Setting it to 0.0 is not recommended.  Defaults to -1.
+    rescale : float
+        Scaling factor (in log10) used to trigger f value
+        rescaling.  If 0, rescale at each iteration.  If a large
+        value, never rescale.  If < 0, rescale is set to 1.3.
 
-    :Returns:
-        x : list of floats
-            The solution.
-        nfeval : int
-            The number of function evaluations.
-        rc :
-            Return code as defined in the RCSTRINGS dict.
+    Returns
+    -------
+    x : list of floats
+        The solution.
+    nfeval : int
+        The number of function evaluations.
+    rc : int
+        Return code as defined in the RCSTRINGS dict.
 
-    :SeeAlso:
-      - fmin, fmin_powell, fmin_cg, fmin_bfgs, fmin_ncg :
-         multivariate local optimizers
-
-      - leastsq : nonlinear least squares minimizer
-
-      - fmin_l_bfgs_b, fmin_tnc, fmin_cobyla : constrained
-        multivariate optimizers
-
-      - anneal, brute : global optimizers
-
-      - fminbound, brent, golden, bracket : local scalar minimizers
-
-      - fsolve : n-dimensional root-finding
-
-      - brentq, brenth, ridder, bisect, newton : one-dimensional root-finding
-
-      - fixed_point : scalar fixed-point finder
-
-      - OpenOpt : a tool which offers a unified syntax to call this and 
-         other solvers with possibility of automatic differentiation.
-
-"""
+    """
     x0 = asarray(x0, dtype=float).tolist()
     n = len(x0)
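
For reference, a minimal usage sketch of the interface described in the
docstring above (not part of this commit; the bounded quadratic objective
and its gradient below are illustrative only):

    from scipy.optimize import fmin_tnc

    def f_and_g(x):
        # Bounded quadratic with unconstrained minimum at (2, -1).
        f = (x[0] - 2.0)**2 + (x[1] + 1.0)**2
        g = [2.0*(x[0] - 2.0), 2.0*(x[1] + 1.0)]
        return f, g  # func returns (f, g) since fprime is None

    # Constrain x[0] >= 0 and x[1] <= 0; messages=0 suppresses output.
    x, nfeval, rc = fmin_tnc(f_and_g, [0.0, 0.0],
                             bounds=[(0, None), (None, 0)],
                             messages=0)

If only the function value is available, pass approx_grad=True and have
func return f alone.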
 



