calculate_optimization

static FunctionNamespace.calculate_optimization(*, graph, cost_node_name, output_node_names, max_iteration_count=None, target_cost=None, cost_tolerance=None, optimization_count=4, cost_history_scope='NONE', seed=None, **kwargs)

Perform gradient-based deterministic optimization of generic real-valued functions.

Use this function to determine a choice of variables that minimizes the value of a scalar real-valued cost function of those variables. You express that cost function as a graph describing how the input variables are transformed into the cost value.

Note that this function picks a different random initial guess for each optimization run. You can instead provide initial values for the optimization variables in the graph (either a single set of values or a list of such sets, one per run), and the optimizer will start from those values. If the optimization_count is larger than the number of provided sets of initial values, the optimizer uses random starting values for the remaining runs.
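
For instance, this sketch (with illustrative values, reusing a graph and cost node like those in the Examples section below) seeds the first two of four runs and lets the remaining two start from random values:

>>> variables = graph.optimization_variable(
...     2, -1, 1,
...     initial_values=[np.array([0.6, 0.0]), np.array([-0.2, 0.4])],
...     name="variables",
... )
>>> result = qctrl.functions.calculate_optimization(
...     graph=graph,
...     cost_node_name="cost",
...     output_node_names=["variables"],
...     optimization_count=4,
... )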

Parameters:
  • graph (qctrl.graphs.Graph) – The graph describing the cost \(C(\mathbf v)\) and outputs \(\{F_j(\mathbf v)\}\) as functions of the optimizable input variables \(\mathbf v\). The graph must contain nodes with names \(s\) (giving the cost function) and \(\{s_j\}\) (giving the output functions).

  • cost_node_name (str) – The name \(s\) of the real-valued scalar graph node that defines the cost function \(C(\mathbf v)\) to be minimized.

  • output_node_names (List[str]) – The names \(\{s_j\}\) of the graph nodes that define the output functions \(\{F_j(\mathbf v)\}\). The function evaluates these using the optimized variables and returns them in the output.

  • max_iteration_count (int, optional) – The maximum number of cost evaluations to perform. You can set this as an early stop condition for the optimizer. Defaults to None, which means that the optimizer runs until it converges.

  • target_cost (float, optional) – A target value of the cost that you can set as an early stop condition for the optimizer. If the cost becomes equal to or smaller than this value, the optimization halts. Defaults to None, which means that the optimizer runs until it converges.

  • cost_tolerance (float, optional) – The tolerance \(\mathrm{tol}\) used to check whether the cost function has converged. You can set this as an early stop condition for the optimizer. The optimizer stops when the improvement of the cost function between iterations falls below the tolerance, that is, when \((C_j - C_{j+1}) / \max(|C_j|, |C_{j+1}|, 1) \leq \mathrm{tol}\). Defaults to None, meaning the optimizer handles the tolerance automatically. You can try different values depending on your optimization problem; a recommended starting value is about 1e-9.

  • optimization_count (int, optional) – The number \(N\) of independent randomly seeded optimizations to perform. The function returns the results from the best optimization (the one with the lowest cost). Defaults to 4. Depending on the landscape of the optimization problem, a larger value will help in finding lower costs, at the expense of prolonging computation time.

  • cost_history_scope (qctrl.dynamic.types.HistoryScope, optional) – Configuration for the scope of the returned cost history data. Use this to select whether you want the history data to be returned. Defaults to returning no cost history data.

  • seed (int, optional) – Seed for the random number generator used in the optimizer. If set, must be non-negative. Use this option to generate deterministic results from the optimizer (see the reproducibility sketch in the Examples section below). Note that if your graph contains nodes generating random samples, you also need to set seeds for those nodes to ensure a reproducible result.

Returns:

The result of an optimization. It includes the minimized cost \(C(\mathbf v_\mathrm{optimized})\), and the outputs \(\{s_j: F_j(\mathbf v_\mathrm{optimized})\}\) corresponding to the variables that achieve that minimum cost.

Return type:

qctrl.dynamic.types.optimization.Result

See also

calculate_gradient_free_optimization

Perform gradient-free optimization of generic real-valued functions.

calculate_graph

Evaluate generic functions.

calculate_stochastic_optimization

Perform gradient-based stochastic optimization of generic real-valued functions.

Notes

Given a cost function \(C(\mathbf v)\) of variables \(\mathbf v\), this function computes an estimate \(\mathbf v_\mathrm{optimized}\) of \(\mathrm{argmin}_{\mathbf v} C(\mathbf v)\), namely the choice of variables \(\mathbf v\) that minimizes \(C(\mathbf v)\). The function then calculates the values of arbitrary output functions \(\{F_j(\mathbf v_\mathrm{optimized})\}\) with that choice of variables.

This function represents the cost and output functions as nodes of a graph. This graph defines the input optimization variables \(\mathbf v\), and how these variables are transformed into the corresponding cost and output quantities. You build the graph from primitive nodes defined in the graphs module of the Q-CTRL Python package. Each such node, which can be identified by a name, represents a function of the previous nodes in the graph (and thus, transitively, a function of the input variables). You can use any named scalar real-valued node \(s\) as the cost function, and any named nodes \(\{s_j\}\) as outputs.
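
As a minimal sketch of this structure (node names here are illustrative, and the arithmetic and graph.sum nodes are assumed to behave as in the Q-CTRL graphs module), a graph can expose one scalar cost node and several named output nodes:

>>> graph = qctrl.create_graph()
>>> v = graph.optimization_variable(3, -2, 2, name="v")
>>> # Scalar real-valued cost node: the squared norm of the variables.
>>> cost = graph.sum(v ** 2, name="cost")
>>> # A named output node computed from the same variables.
>>> scaled = 2.0 * v
>>> scaled.name = "scaled"
>>> result = qctrl.functions.calculate_optimization(
...     graph=graph, cost_node_name="cost", output_node_names=["v", "scaled"]
... )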

After you provide a cost function \(C(\mathbf v)\) (via a graph), this function runs an optimization process \(N\) times, each with random initial variables, to identify \(N\) local minima of the cost function, and then takes the variables corresponding to the best such minimum as \(\mathbf v_\mathrm{optimized}\).

A common use-case for this function is to determine controls for a quantum system that yield an optimal gate: the variables \(\mathbf v\) parameterize the controls to be optimized, and the cost function \(C(\mathbf v)\) is the operational infidelity describing the quality of the resulting gate relative to a target gate. When combined with the node definitions in the Q-CTRL Python package, which make it convenient to define such cost functions, this function provides a highly configurable framework for quantum control that encapsulates other common tools such as gradient ascent pulse engineering [1] and chopped random basis (CRAB) optimization [2].
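
As a rough sketch of that use case (assuming the pwc_signal, target, and infidelity_pwc nodes from the Q-CTRL graphs module; the bounds, duration, and segment count are illustrative), a single-qubit optimization towards an X gate might look like:

>>> sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
>>> graph = qctrl.create_graph()
>>> # Ten optimizable piecewise-constant drive amplitudes (illustrative bounds).
>>> amplitudes = graph.optimization_variable(10, -1e6, 1e6)
>>> drive = graph.pwc_signal(values=amplitudes, duration=1e-6, name="drive")
>>> # Infidelity of the driven evolution relative to the target X gate.
>>> infidelity = graph.infidelity_pwc(
...     hamiltonian=drive * sigma_x / 2,
...     target=graph.target(operator=sigma_x),
...     name="infidelity",
... )
>>> result = qctrl.functions.calculate_optimization(
...     graph=graph, cost_node_name="infidelity", output_node_names=["drive"]
... )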

The optimizer carries out the gradient-based optimization until it converges to a minimum. You can set early stop conditions, such as a target cost or a maximum number of cost evaluations, via the target_cost and max_iteration_count parameters. If you provide both, the optimizer terminates when either of the two conditions is satisfied.
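
For instance (a sketch reusing the graph and cost node from the Examples section below), the following call stops as soon as the cost reaches 1e-6, or after 100 cost evaluations, whichever comes first:

>>> result = qctrl.functions.calculate_optimization(
...     graph=graph,
...     cost_node_name="cost",
...     output_node_names=["variables"],
...     max_iteration_count=100,
...     target_cost=1e-6,
... )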

References

[1] N. Khaneja, T. Reiss, C. Kehlet, T. Schulte-Herbrüggen, and S. J. Glaser, Journal of Magnetic Resonance 172, 296 (2005).

[2] T. Caneva, T. Calarco, and S. Montangero, Physical Review A 84, 022326 (2011).

Examples

Perform a simple optimization with variables that are initialized to given values.

>>> import numpy as np
>>> graph = qctrl.create_graph()
>>> variables = graph.optimization_variable(
...     2, -1, 1, initial_values=np.array([0.6, 0.]), name="variables"
... )
>>> x, y = variables[0], variables[1]
>>> cost = (x - 0.5) ** 2 + (y - 0.1) ** 2
>>> cost.name = "cost"
>>> result = qctrl.functions.calculate_optimization(
...     graph=graph, cost_node_name="cost", output_node_names=["variables"]
... )
>>> result.cost, result.output
(0.0, {'variables': {'value': array([0.5, 0.1])}})
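
Perform a reproducible optimization by fixing the optimizer seed (a sketch reusing the graph above; as noted in the seed parameter description, any random-sampling nodes in the graph would also need their own seeds):

>>> result_a = qctrl.functions.calculate_optimization(
...     graph=graph, cost_node_name="cost", output_node_names=["variables"], seed=7
... )
>>> result_b = qctrl.functions.calculate_optimization(
...     graph=graph, cost_node_name="cost", output_node_names=["variables"], seed=7
... )
>>> result_a.cost == result_b.cost
True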

See also the How to optimize controls in arbitrary quantum systems using graphs user guide.