fmin

scipy.optimize.fmin(func, x0, args=(), xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, initial_simplex=None)
Minimize a function using the downhill simplex algorithm.

This algorithm only uses function values, not derivatives or second derivatives.

Parameters:
func : callable func(x, *args)
    The objective function to be minimized.
x0 : ndarray
    Initial guess.
args : tuple, optional
    Extra arguments passed to func, i.e., f(x, *args).
xtol : float, optional
    Absolute error in xopt between iterations that is acceptable for convergence.
ftol : number, optional
    Absolute error in func(xopt) between iterations that is acceptable for convergence.
maxiter : int, optional
    Maximum number of iterations to perform.
maxfun : number, optional
    Maximum number of function evaluations to make.
full_output : bool, optional
    Set to True if fopt and warnflag outputs are desired.
disp : bool, optional
    Set to True to print convergence messages.
retall : bool, optional
    Set to True to return a list of solutions at each iteration.
callback : callable, optional
    Called after each iteration, as callback(xk), where xk is the current parameter vector.
initial_simplex : array_like of shape (N + 1, N), optional
    Initial simplex. If given, overrides x0. initial_simplex[j, :] should contain the coordinates of the jth vertex of the N + 1 vertices in the simplex, where N is the dimension.
 
Returns:
xopt : ndarray
    Parameter that minimizes function.
fopt : float
    Value of function at minimum: fopt = func(xopt).
iter : int
    Number of iterations performed.
funcalls : int
    Number of function calls made.
warnflag : int
    1 : Maximum number of function evaluations made.
    2 : Maximum number of iterations reached.
allvecs : list
    Solution at each iteration.
 
See also
minimize
    Interface to minimization algorithms for multivariate functions. See the ‘Nelder-Mead’ method in particular.
Notes

Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables.

This algorithm has a long history of successful use in applications. But it will usually be slower than an algorithm that uses first or second derivative information. In practice, it can have poor performance in high-dimensional problems and is not robust to minimizing complicated functions. Additionally, there currently is no complete theory describing when the algorithm will successfully converge to the minimum, or how fast it will if it does. Both the ftol and xtol criteria must be met for convergence.

References
[1] Nelder, J.A. and Mead, R. (1965), “A simplex method for function minimization”, The Computer Journal, 7, pp. 308-313.
[2] Wright, M.H. (1996), “Direct Search Methods: Once Scorned, Now Respectable”, in Numerical Analysis 1995, Proceedings of the 1995 Dundee Biennial Conference in Numerical Analysis, D.F. Griffiths and G.A. Watson (Eds.), Addison Wesley Longman, Harlow, UK, pp. 191-208.

Examples

>>> from scipy import optimize
>>> def f(x):
...     return x**2
>>> minimum = optimize.fmin(f, 1)
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 17
         Function evaluations: 34
>>> minimum[0]
-8.8817841970012523e-16
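As a further illustration (not part of the original page), the sketch below exercises full_output, callback, and disp together; the Rosenbrock-style objective and the variable names (rosen, history) are my own choices, and the exact iteration counts will vary:

```python
from scipy import optimize

# A Rosenbrock-style 2-D test objective (a common benchmark; chosen
# here for illustration, not mandated by fmin).
def rosen(x):
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

history = []  # the callback receives the current parameter vector xk
              # after each iteration; list.append records them all

# full_output=True makes fmin return (xopt, fopt, iter, funcalls,
# warnflag) instead of xopt alone; disp=False suppresses the
# convergence printout shown in the example above.
xopt, fopt, n_iter, funcalls, warnflag = optimize.fmin(
    rosen,
    x0=[-1.0, 1.0],
    xtol=1e-8,
    ftol=1e-8,
    full_output=True,
    disp=False,
    callback=history.append,
)

# warnflag == 0 indicates neither maxfun (1) nor maxiter (2) was hit;
# xopt should be close to the minimizer (1, 1).
```

Since both the xtol and ftol criteria must be met, tightening only one of them does not by itself tighten convergence.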