jacobian
- scipy.differentiate.jacobian(f, x, *, tolerances=None, maxiter=10, order=8, initial_step=0.5, step_factor=2.0, step_direction=0)
- Evaluate the Jacobian of a function numerically.
- Parameters:
- f : callable
- The function whose Jacobian is desired. The signature must be f(xi: ndarray) -> ndarray, where each element of xi is a finite real. If the function to be differentiated accepts additional arguments, wrap it (e.g. using functools.partial or lambda) and pass the wrapped callable into jacobian. f must not mutate the array xi. See Notes regarding vectorization and the dimensionality of the input and output.
- x : float array_like
- Points at which to evaluate the Jacobian. Must have at least one dimension. See Notes regarding the dimensionality and vectorization. 
- tolerances : dictionary of floats, optional
- Absolute and relative tolerances. Valid keys of the dictionary are:
  - atol - absolute tolerance on the derivative
  - rtol - relative tolerance on the derivative
- Iteration will stop when res.error < atol + rtol * abs(res.df). The default atol is the smallest normal number of the appropriate dtype, and the default rtol is the square root of the precision of the appropriate dtype.
- maxiter : int, default: 10
- The maximum number of iterations of the algorithm to perform. See Notes. 
- order : int, default: 8
- The (positive integer) order of the finite difference formula to be used. Odd integers will be rounded up to the next even integer. 
- initial_step : float array_like, default: 0.5
- The (absolute) initial step size for the finite difference derivative approximation. Must be broadcastable with x and step_direction. 
- step_factor : float, default: 2.0
- The factor by which the step size is reduced in each iteration; i.e. the step size in iteration 1 is initial_step/step_factor. If step_factor < 1, subsequent steps will be greater than the initial step; this may be useful if steps smaller than some threshold are undesirable (e.g. due to subtractive cancellation error).
- step_direction : integer array_like
- An array representing the direction of the finite difference steps (e.g. for use when x lies near the boundary of the domain of the function). Must be broadcastable with x and initial_step. Where 0 (default), central differences are used; where negative (e.g. -1), steps are non-positive; and where positive (e.g. 1), all steps are non-negative. A brief sketch using the optional parameters above follows this list.
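- As a minimal sketch of these optional parameters (the function f, the point x, and the tolerance, order, and step_direction values below are arbitrary illustrative choices, not recommendations):

>>> import numpy as np
>>> from scipy.differentiate import jacobian
>>> def f(x):                      # simple vectorized example with m = 2 inputs, n = 2 outputs
...     x1, x2 = x
...     return np.stack([np.exp(x1) * x2, x1 + np.cos(x2)])
>>> x = np.array([0.5, 1.5])
>>> # tighter tolerances and a higher-order finite difference formula than the defaults
>>> res = jacobian(f, x, tolerances=dict(atol=1e-12, rtol=1e-10), order=10)
>>> res.df.shape
(2, 2)
>>> # one-sided (rightward) steps keep all evaluations inside the domain of sqrt
>>> res = jacobian(np.sqrt, np.array([0.25, 4.0]), step_direction=1)
>>> np.allclose(res.df, np.diag([1.0, 0.25]))   # d/dx sqrt(x) = 1/(2*sqrt(x))
True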
 
- Returns:
- res : _RichResult
- An object similar to an instance of scipy.optimize.OptimizeResult with the following attributes. The descriptions are written as though the values will be scalars; however, if f returns an array, the outputs will be arrays of the same shape.
- success : bool array
- True where the algorithm terminated successfully (status 0); False otherwise.
- status : int array
- An integer representing the exit status of the algorithm.
  - 0: The algorithm converged to the specified tolerances.
  - -1: The error estimate increased, so iteration was terminated.
  - -2: The maximum number of iterations was reached.
  - -3: A non-finite value was encountered.
 
- df : float array
- The Jacobian of f at x, if the algorithm terminated successfully.
- error : float array
- An estimate of the error: the magnitude of the difference between the current estimate of the Jacobian and the estimate in the previous iteration.
- nit : int array
- The number of iterations of the algorithm that were performed.
- nfev : int array
- The number of points at which f was evaluated.
- Each element of an attribute is associated with the corresponding element of df. For instance, element i of nfev is the number of points at which f was evaluated for the sake of computing element i of df. A brief sketch of inspecting these attributes follows.
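- For instance (a minimal sketch; the function and evaluation point below are arbitrary examples), the result attributes can be inspected elementwise:

>>> import numpy as np
>>> from scipy.differentiate import jacobian
>>> def f(x):
...     x1, x2 = x
...     return np.stack([x1 * np.exp(x2), x1**2 - x2])   # n = 2 outputs, m = 2 inputs
>>> res = jacobian(f, np.array([1.0, 0.5]))
>>> bool(np.all(res.success))        # every element of df converged (status 0)
True
>>> res.df.shape                     # the Jacobian itself has shape (n, m)
(2, 2)
>>> bool(np.all(res.error < 1e-6))   # per-element error estimates are small
True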
 
- Notes
- Suppose we wish to evaluate the Jacobian of a function \(f: \mathbf{R}^m \rightarrow \mathbf{R}^n\). Assign to variables m and n the positive integer values of \(m\) and \(n\), respectively, and let ... represent an arbitrary tuple of integers. If we wish to evaluate the Jacobian at a single point, then:
  - argument x must be an array of shape (m,)
  - argument f must be vectorized to accept an array of shape (m, ...). The first axis represents the \(m\) inputs of \(f\); the remainder are for evaluating the function at multiple points in a single call.
  - argument f must return an array of shape (n, ...). The first axis represents the \(n\) outputs of \(f\); the remainder are for the result of evaluating the function at multiple points.
  - attribute df of the result object will be an array of shape (n, m), the Jacobian.
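- The following minimal sketch illustrates this contract; the function below (with m = 2 inputs and n = 3 outputs) is an arbitrary example, and recording the shapes that jacobian passes to f is only for illustration:

>>> import numpy as np
>>> from scipy.differentiate import jacobian
>>> observed = []
>>> def f(xi):                         # xi has shape (m, ...) as described above
...     observed.append(xi.shape)      # record the shapes jacobian passes in
...     x1, x2 = xi                    # m = 2 inputs
...     return np.stack([x1 * x2, np.sin(x1), x2**2])   # n = 3 outputs, shape (3, ...)
>>> res = jacobian(f, np.array([1.0, 2.0]))
>>> res.df.shape                       # the Jacobian has shape (n, m)
(3, 2)
>>> all(shape[0] == 2 for shape in observed)   # the first axis always holds the m inputs
True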
- This function is also vectorized in the sense that the Jacobian can be evaluated at k points in a single call. In this case, x would be an array of shape (m, k), f would accept an array of shape (m, k, ...) and return an array of shape (n, k, ...), and the df attribute of the result would have shape (n, m, k).
- Suppose the desired callable f_not_vectorized is not vectorized; it can only accept an array of shape (m,). A simple solution to satisfy the required interface is to wrap f_not_vectorized as follows:

    def f(x):
        return np.apply_along_axis(f_not_vectorized, axis=0, arr=x)

- Alternatively, suppose the desired callable f_vec_q is vectorized, but only for 2-D arrays of shape (m, q). To satisfy the required interface, consider:

    def f(x):
        m, batch = x.shape[0], x.shape[1:]    # x.shape is (m, ...)
        x = np.reshape(x, (m, -1))            # `-1` is short for q = prod(batch)
        res = f_vec_q(x)                      # pass shape (m, q) to function
        n = res.shape[0]
        return np.reshape(res, (n,) + batch)  # return shape (n, ...)

- Then pass the wrapped callable f as the first argument of jacobian.
- References
[1] Jacobian matrix and determinant, Wikipedia, https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant
- Examples
- The Rosenbrock function maps from \(\mathbf{R}^m \rightarrow \mathbf{R}\); the SciPy implementation scipy.optimize.rosen is vectorized to accept an array of shape (m, p) and return an array of shape (p,). Suppose we wish to evaluate the Jacobian (AKA the gradient, because the function returns a scalar) at [0.5, 0.5, 0.5].

>>> import numpy as np
>>> from scipy.differentiate import jacobian
>>> from scipy.optimize import rosen, rosen_der
>>> m = 3
>>> x = np.full(m, 0.5)
>>> res = jacobian(rosen, x)
>>> ref = rosen_der(x)  # reference value of the gradient
>>> res.df, ref
(array([-51.,  -1.,  50.]), array([-51.,  -1.,  50.]))

- As an example of a function with multiple outputs, consider Example 4 from [1].

>>> def f(x):
...     x1, x2, x3 = x
...     return [x1, 5*x3, 4*x2**2 - 2*x3, x3*np.sin(x1)]

- The true Jacobian is given by:

>>> def df(x):
...     x1, x2, x3 = x
...     one = np.ones_like(x1)
...     return [[one, 0*one, 0*one],
...             [0*one, 0*one, 5*one],
...             [0*one, 8*x2, -2*one],
...             [x3*np.cos(x1), 0*one, np.sin(x1)]]

- Evaluate the Jacobian at an arbitrary point.

>>> rng = np.random.default_rng()
>>> x = rng.random(size=3)
>>> res = jacobian(f, x)
>>> ref = df(x)
>>> res.df.shape == (4, 3)
True
>>> np.allclose(res.df, ref)
True

- Evaluate the Jacobian at 10 arbitrary points in a single call.

>>> x = rng.random(size=(3, 10))
>>> res = jacobian(f, x)
>>> ref = df(x)
>>> res.df.shape == (4, 3, 10)
True
>>> np.allclose(res.df, ref)
True
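- Finally, a brief sketch of the np.apply_along_axis wrapper suggested in the Notes; the non-vectorized helper below is an arbitrary stand-in for a callable that only accepts a single point.

>>> import numpy as np
>>> from scipy.differentiate import jacobian
>>> def f_not_vectorized(xi):            # accepts only a single point of shape (m,)
...     x1, x2, x3 = xi
...     return np.array([x1 * x2, x2 + np.cos(x3)])
>>> def f(x):                            # wrapper from the Notes adds batch-axis support
...     return np.apply_along_axis(f_not_vectorized, axis=0, arr=x)
>>> x = np.array([1.0, 2.0, 3.0])
>>> res = jacobian(f, x)
>>> ref = np.array([[2.0, 1.0, 0.0],
...                 [0.0, 1.0, -np.sin(3.0)]])   # Jacobian of f_not_vectorized by hand
>>> np.allclose(res.df, ref)
True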