Graph.gradient(tensor, variables, *, name=None)

Calculates a single gradient vector for all the variables.

The gradient is a vector containing all the first partial derivatives of the tensor with respect to the variables.

Parameters:

  • tensor (Tensor(scalar, real)) – The real scalar tensor \(T\) whose gradient vector you want to calculate.

  • variables (list[Tensor(real)]) – The list of real variables \(\{\theta_i\}\) with respect to which you want to take the first partial derivatives of the tensor. If any tensor in the list is not scalar, this function treats each of its elements as a separate variable. It does this by flattening all tensors and concatenating them in the same sequence as this list.

  • name (str, optional) – The name of the node.
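To illustrate the flattening rule described for variables, here is a minimal NumPy sketch (a standalone illustration with hypothetical values, not a call to Graph.gradient) of how a scalar variable and a 2×2 variable are combined into the scalar sequence \(\{\theta_i\}\):

```python
import numpy as np

# Hypothetical variables: a scalar followed by a 2x2 tensor.
theta_a = np.array(0.5)
theta_b = np.array([[1.0, 2.0], [3.0, 4.0]])

# Non-scalar variables are flattened and concatenated in the order
# of the input list, yielding the scalar sequence {theta_i}.
flat = np.concatenate([np.ravel(v) for v in [theta_a, theta_b]])
# The flattened sequence is [0.5, 1.0, 2.0, 3.0, 4.0].
```

The gradient vector returned by the function follows this same ordering.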


Returns:

The gradient vector \(\nabla T\) containing the first partial derivatives of the tensor \(T\) with respect to the variables \(\{\theta_i\}\).

Return type:

Tensor(1D, real)


Warning:

This function currently doesn’t support calculating a gradient vector for a graph that includes an infidelity_pwc node whose Hamiltonian has degenerate eigenvalues in any segment. In that case, the function returns a gradient vector of nan values.


The \(i\)th element of the gradient contains the partial derivative of the tensor with respect to the \(i\)th variable \(\theta_i\):

\[(\nabla T)_{i} = \frac{\partial T}{\partial \theta_i}.\]

The variables \(\{\theta_i\}\) follow the same sequence as the input list of variables. If some of the variables are not scalars, this function flattens them and concatenates them in the order of the input list to create the sequence of scalar variables \(\{\theta_i\}\).
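As a sketch of the relation above (a plain NumPy illustration under assumed variable values, not a call to Graph.gradient): for \(T = \sum_i \theta_i^2\) over a flattened sequence of variables, each element of the gradient computed by central finite differences matches \(\partial T / \partial \theta_i = 2\theta_i\), in the same order as the flattened variables.

```python
import numpy as np

def T(theta):
    # Hypothetical scalar function of the flattened variables {theta_i}.
    return np.sum(theta ** 2)

# Flattened sequence {theta_i}: a scalar followed by a flattened 2-vector,
# concatenated in the order of the (hypothetical) input list.
theta = np.concatenate([np.ravel(np.array(0.5)),
                        np.ravel(np.array([1.0, 2.0]))])

# (grad T)_i = dT/dtheta_i, approximated by central finite differences.
eps = 1e-6
grad = np.array([
    (T(theta + eps * e) - T(theta - eps * e)) / (2 * eps)
    for e in np.eye(theta.size)
])

# For T = sum(theta_i^2), the analytic gradient is 2 * theta.
```

The resulting 1D gradient vector has one entry per scalar variable \(\theta_i\), in the flattened-and-concatenated order described above.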