fmin_cg
- scipy.optimize.fmin_cg(f, x0, fprime=None, args=(), gtol=1e-05, norm=inf, epsilon=1.4901161193847656e-08, maxiter=None, full_output=0, disp=1, retall=0, callback=None, c1=0.0001, c2=0.4)
Minimize a function using a nonlinear conjugate gradient algorithm.

Parameters:
- f : callable, f(x, *args)
- Objective function to be minimized. Here x must be a 1-D array of the variables that are to be changed in the search for a minimum, and args are the other (fixed) parameters of f. 
- x0 : ndarray
- A user-supplied initial estimate of xopt, the optimal value of x. It must be a 1-D array of values. 
- fprime : callable, fprime(x, *args), optional
- A function that returns the gradient of f at x. Here x and args are as described above for f. The returned value must be a 1-D array. Defaults to None, in which case the gradient is approximated numerically (see epsilon, below). 
- args : tuple, optional
- Parameter values passed to f and fprime. Must be supplied whenever additional fixed parameters are needed to completely specify the functions f and fprime. 
- gtol : float, optional
- Stop when the norm of the gradient is less than gtol. 
- norm : float, optional
- Order to use for the norm of the gradient (-np.inf is min, np.inf is max).
- epsilon : float or ndarray, optional
- Step size(s) to use when fprime is approximated numerically. Can be a scalar or a 1-D array. Defaults to sqrt(eps), with eps the floating point machine precision. Usually sqrt(eps) is about 1.5e-8.
- maxiter : int, optional
- Maximum number of iterations to perform. Default is 200 * len(x0).
- full_output : bool, optional
- If True, return fopt, func_calls, grad_calls, and warnflag in addition to xopt. See the Returns section below for additional information on the optional return values, and the sketch after that section for how to unpack them.
- disp : bool, optional
- If True, print a convergence message, followed by xopt.
- retall : bool, optional
- If True, add to the returned values the results of each iteration. 
- callback : callable, optional
- An optional user-supplied function, called after each iteration as callback(xk), where xk is the current value of the parameter vector. (A usage sketch follows this list.)
- c1 : float, default: 1e-4
- Parameter for Armijo condition rule. 
- c2 : float, default: 0.4
- Parameter for curvature condition rule. 
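As a concrete illustration of the callback and epsilon-based numerical differentiation described above, the sketch below minimizes a simple quadratic without supplying fprime (so the gradient is approximated numerically) and records each iterate through a callback. The objective quad and the iterates list are illustrative names, not part of the API:

>>> import numpy as np
>>> from scipy import optimize
>>> def quad(x):  # illustrative objective with minimum at (1, -2)
...     return (x[0] - 1)**2 + 10*(x[1] + 2)**2
>>> iterates = []  # the callback appends the current xk after each iteration
>>> xopt = optimize.fmin_cg(quad, np.zeros(2), callback=iterates.append,
...                         gtol=1e-6, disp=0)  # no fprime: numerical gradient
>>> np.allclose(xopt, [1.0, -2.0], atol=1e-4)
True
>>> len(iterates) > 0  # one recorded vector per iteration
True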
 
Returns:
- xopt : ndarray
- Parameters which minimize f, i.e., f(xopt) == fopt.
- fopt : float, optional
- Minimum value found, f(xopt). Only returned if full_output is True. 
- func_calls : int, optional
- The number of function calls made. Only returned if full_output is True.
- grad_calls : int, optional
- The number of gradient calls made. Only returned if full_output is True. 
- warnflag : int, optional
- Integer value with warning status, only returned if full_output is True.
  - 0 : Success.
  - 1 : The maximum number of iterations was exceeded.
  - 2 : Gradient and/or function calls were not changing. May indicate that precision was lost, i.e., the routine did not converge.
  - 3 : NaN result encountered.
- allvecs : list of ndarray, optional
- List of arrays, containing the results at each iteration. Only returned if retall is True. 
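When full_output and retall are both True, the extra values are returned in the order listed above, with allvecs appended last. A minimal sketch of unpacking them, using an illustrative quadratic objective (the names out and quad are arbitrary):

>>> import numpy as np
>>> from scipy import optimize
>>> def quad(x):  # illustrative objective
...     return x[0]**2 + 3*x[1]**2
>>> out = optimize.fmin_cg(quad, np.ones(2), full_output=True,
...                        retall=True, disp=0)
>>> xopt, fopt, func_calls, grad_calls, warnflag, allvecs = out
>>> warnflag  # 0 indicates successful convergence
0
>>> allvecs[0]  # the first recorded vector is the initial guess
array([1., 1.])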
 
See also:

- minimize
- Common interface to all scipy.optimize algorithms for unconstrained and constrained minimization of multivariate functions. It provides an alternative way to call fmin_cg, by specifying method='CG'.
Notes

This conjugate gradient algorithm is based on that of Polak and Ribiere [1].

Conjugate gradient methods tend to work better when:

- f has a unique global minimizing point, and no local minima or other stationary points,
- f is, at least locally, reasonably well approximated by a quadratic function of the variables,
- f is continuous and has a continuous gradient,
- fprime is not too large, e.g., has a norm less than 1000,
- the initial guess, x0, is reasonably close to f's global minimizing point, xopt.

Parameters c1 and c2 must satisfy 0 < c1 < c2 < 1. (A quick way to sanity-check a hand-written fprime is sketched after the examples below.)

References

[1] Wright & Nocedal, "Numerical Optimization", 1999, pp. 120-122.

Examples

Example 1: seek the minimum value of the expression a*u**2 + b*u*v + c*v**2 + d*u + e*v + f for given values of the parameters and an initial guess (u, v) = (0, 0).

>>> import numpy as np
>>> args = (2, 3, 7, 8, 9, 10)  # parameter values
>>> def f(x, *args):
...     u, v = x
...     a, b, c, d, e, f = args
...     return a*u**2 + b*u*v + c*v**2 + d*u + e*v + f
>>> def gradf(x, *args):
...     u, v = x
...     a, b, c, d, e, f = args
...     gu = 2*a*u + b*v + d  # u-component of the gradient
...     gv = b*u + 2*c*v + e  # v-component of the gradient
...     return np.asarray((gu, gv))
>>> x0 = np.asarray((0, 0))  # Initial guess.
>>> from scipy import optimize
>>> res1 = optimize.fmin_cg(f, x0, fprime=gradf, args=args)
Optimization terminated successfully.
         Current function value: 1.617021
         Iterations: 4
         Function evaluations: 8
         Gradient evaluations: 8
>>> res1
array([-1.80851064, -0.25531915])

Example 2: solve the same problem using the minimize function. (The opts dictionary shows all of the available options, although in practice only non-default values would be needed. The returned value will be a dictionary.)

>>> opts = {'maxiter' : None,    # default value.
...         'disp' : True,       # non-default value.
...         'gtol' : 1e-5,       # default value.
...         'norm' : np.inf,     # default value.
...         'eps' : 1.4901161193847656e-08}  # default value.
>>> res2 = optimize.minimize(f, x0, jac=gradf, args=args,
...                          method='CG', options=opts)
Optimization terminated successfully.
         Current function value: 1.617021
         Iterations: 4
         Function evaluations: 8
         Gradient evaluations: 8
>>> res2.x  # minimum found
array([-1.80851064, -0.25531915])
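Because convergence hinges on an accurate gradient (see the Notes above), it can be worth validating a hand-written fprime before calling fmin_cg. One option is scipy.optimize.check_grad, which returns the norm of the difference between the analytic gradient and a finite-difference approximation. A short sketch reusing f, gradf, x0, and args from Example 1 (the threshold 1e-5 is an arbitrary illustrative tolerance):

>>> from scipy.optimize import check_grad
>>> err = check_grad(f, gradf, x0, *args)  # norm of analytic-vs-numeric difference
>>> err < 1e-5  # small error: gradf is consistent with f
True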