run_stochastic_optimization

boulderopal.run_stochastic_optimization(graph, cost_node_name, output_node_names, optimizer=None, iteration_count=1000, target_cost=None, cost_history_scope=HistoryScope.NONE, seed=None)

Perform gradient-based stochastic optimization of generic real-valued functions.

Use this function to determine a choice of variables that minimizes the value of a stochastic scalar real-valued cost function of those variables. You express that cost function as a graph describing how the input variables and stochastic variables are transformed into the cost value.

Parameters:
  • graph (Graph) – The graph describing the cost \(C(\mathbf v, \boldsymbol \beta)\) and outputs \(\{F_j(\mathbf v)\}\) as functions of the optimizable input variables \(\mathbf v\). The graph must contain nodes with names \(s\) (giving the cost function) and \(\{s_j\}\) (giving the output functions).

  • cost_node_name (str) – The name \(s\) of the real-valued scalar graph node that defines the cost function \(C(\mathbf v, \boldsymbol \beta)\) to be minimized.

  • output_node_names (str or list[str]) – The names \(\{s_j\}\) of the graph nodes that define the output functions \(\{F_j(\mathbf v)\}\). The function evaluates these using the optimized variables and returns them in the output. If any of the output nodes depend on random nodes, the random values used to calculate the output might not correspond to the values used to calculate the final cost.

  • optimizer (Adam or str or None) – The optimizer used for the stochastic optimization. You can either create an Adam optimizer for this parameter, or pass the optimizer state string from a previous optimization result (the value under the state key in the dictionary returned by this function) to resume that optimization. Note that you can only resume a previous optimization if you pass the same graph. Defaults to a new Adam optimizer.

  • iteration_count (int) – The number \(N\) of iterations the optimizer performs before it halts. The function returns the results from the best iteration (the one with the lowest cost). Defaults to 1000.

  • target_cost (float or None, optional) – A target value of the cost that you can set as an early stop condition for the optimizer. If the cost becomes smaller than or equal to this value, the optimization halts. Defaults to None, which means that this function runs until iteration_count is reached.

  • cost_history_scope (HistoryScope, optional) – Configuration for the scope of the returned cost history data. Use this to select how you want the history data to be returned. Defaults to HistoryScope.NONE, meaning that no cost history data is returned.

  • seed (int or None, optional) – Seed for the random number generator used in the optimizer. If set, it must be a non-negative integer. Use this option to generate deterministic results from the optimizer. Note that if your graph contains nodes generating random samples, you also need to set seeds for those nodes to ensure a reproducible result.
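To build intuition for how a seed makes a stochastic optimization deterministic, here is a minimal hand-rolled sketch in plain NumPy (an illustration of the concept only, not the Boulder Opal optimizer): reusing the same seed reproduces the same noisy-gradient trajectory, and hence the same result.

```python
# Illustration only (not Boulder Opal's implementation): seeded
# stochastic gradient descent on the cost C(v, beta) = (v - 0.5)^2,
# where each gradient evaluation is corrupted by a noise draw beta.
import numpy as np


def noisy_descent(seed, steps=100, lr=0.1):
    rng = np.random.default_rng(seed)
    v = 0.0
    for _ in range(steps):
        beta = rng.normal(scale=0.05)  # stochastic variable for this step
        grad = 2.0 * (v - 0.5) + beta  # noisy gradient of the cost
        v -= lr * grad
    return v


# The same seed reproduces the same trajectory exactly.
assert noisy_descent(seed=7) == noisy_descent(seed=7)
```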

Returns:

A dictionary containing the optimization result, with the following keys:

cost

The minimum cost function value \(C(\mathbf v_\mathrm{optimized})\) achieved across all optimization iterations.

output

The dictionary giving the value of each requested output node, evaluated at the optimized variables, namely \(\{s_j: F_j(\mathbf v_\mathrm{optimized})\}\). The keys of the dictionary are the names \(\{s_j\}\) of the output nodes. If any of the output nodes depend on random nodes, the random values used to calculate the output might not correspond to the values used to calculate the best cost.

state

The encoded optimizer state. You can pass this string as the optimizer parameter to resume the optimization from the current step.

cost_history

The evolution of the cost function across the optimization iterations, with the scope configured by the cost_history_scope parameter.

metadata

Metadata associated with the calculation. No guarantees are made about the contents of this metadata dictionary; the contained information is intended purely to help interpret the results of the calculation on a one-off basis.

Return type:

dict
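The state value enables a save-and-resume workflow. The following hypothetical sketch mimics that pattern with a hand-rolled Adam optimizer whose state is simply the tuple of moment estimates and step count (Boulder Opal instead encodes its state as an opaque string): resuming a second run from the returned state reproduces a single longer run.

```python
# Hypothetical sketch of the resume-from-state pattern; not Boulder Opal's
# encoding. Here the state is the Adam moment estimates (m, s) and the
# step counter t, carried explicitly between calls.
import math


def adam_minimize(v, state=None, iterations=50, lr=0.1):
    """Minimize C(v) = (v - 0.5)^2 with Adam, carrying state across calls."""
    m, s, t = state if state is not None else (0.0, 0.0, 0)
    b1, b2, eps = 0.9, 0.999, 1e-8
    for _ in range(iterations):
        t += 1
        grad = 2.0 * (v - 0.5)
        m = b1 * m + (1 - b1) * grad        # first-moment estimate
        s = b2 * s + (1 - b2) * grad ** 2   # second-moment estimate
        v -= lr * (m / (1 - b1 ** t)) / (math.sqrt(s / (1 - b2 ** t)) + eps)
    return v, (m, s, t)


# Two 50-iteration runs chained through `state` match one 100-iteration run.
v1, state = adam_minimize(0.0, iterations=50)
v2, _ = adam_minimize(v1, state=state, iterations=50)
v_full, _ = adam_minimize(0.0, iterations=100)
assert abs(v2 - v_full) < 1e-12
```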

See also

boulderopal.stochastic.Adam

Create an Adam optimizer for stochastic optimization.

boulderopal.closed_loop.optimize

Run a closed-loop optimization to find a minimum of the given cost function.

boulderopal.closed_loop.step

Perform a single step in a closed-loop optimization.

boulderopal.execute_graph

Evaluate generic functions.

boulderopal.run_gradient_free_optimization

Perform model-based optimization without using gradient values.

boulderopal.run_optimization

Perform gradient-based deterministic optimization of generic real-valued functions.

Notes

Given a cost function \(C(\mathbf v, \boldsymbol \beta)\) of optimization variables \(\mathbf v\) and stochastic variables \(\boldsymbol \beta\), this function computes an estimate \(\mathbf v_\mathrm{optimized}\) of \(\mathrm{argmin}_{\mathbf v} C(\mathbf v, \boldsymbol \beta)\), namely the choice of variables \(\mathbf v\) that minimizes \(C(\mathbf v, \boldsymbol \beta)\) with noise through the stochastic variables \(\boldsymbol \beta\). The function then calculates the values of arbitrary output functions \(\{F_j(\mathbf v_\mathrm{optimized})\}\) with that choice of variables.

This function represents the cost and output functions as nodes of a graph. This graph defines the input variables \(\mathbf v\) and stochastic variables \(\boldsymbol \beta\), and how these variables are transformed into the corresponding cost and output quantities. You build the graph from primitive nodes defined in the Graph object. Each such node, which can be identified by a name, represents a function of the previous nodes in the graph (and thus, transitively, a function of the input variables). You can use any named scalar real-valued node \(s\) as the cost function, and any named nodes \(\{s_j\}\) as outputs.

After you provide a cost function \(C(\mathbf v, \boldsymbol \beta)\) (via a graph), this function runs the optimization process for \(N\) iterations, each with random stochastic variables, to identify local minima of the stochastic cost function, and then takes the variables corresponding to the best such minimum as \(\mathbf v_\mathrm{optimized}\).
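As a concrete (and purely illustrative) picture of this procedure, the following NumPy sketch runs a hand-rolled Adam optimizer on a stochastic cost, drawing fresh stochastic variables at each iteration and keeping the variables from the best (lowest-cost) iteration; none of this is Boulder Opal's actual implementation.

```python
# Illustrative only: Adam on the stochastic cost
# C(v, beta) = (v - 0.5)^2 + beta * v, with beta redrawn every iteration.
import numpy as np

rng = np.random.default_rng(0)
v, m, s = 0.0, 0.0, 0.0                 # optimization variable, Adam moments
lr, b1, b2, eps = 0.1, 0.9, 0.999, 1e-8
best_cost, best_v = np.inf, v

for t in range(1, 1001):                # N = iteration_count
    beta = rng.normal(scale=0.1)        # fresh stochastic variable
    cost = (v - 0.5) ** 2 + beta * v
    if cost < best_cost:                # keep the best iteration's result
        best_cost, best_v = cost, v
    grad = 2.0 * (v - 0.5) + beta       # gradient of the sampled cost
    m = b1 * m + (1 - b1) * grad
    s = b2 * s + (1 - b2) * grad ** 2
    v -= lr * (m / (1 - b1 ** t)) / (np.sqrt(s / (1 - b2 ** t)) + eps)
```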

Note that this function performs only a single optimization run. That means that if you provide lists of initial values for the optimization variables in the graph, only the first value of each list is used.

A common use-case for this function is to determine controls for a quantum system that yield an optimal gate subject to noise: the variables \(\mathbf v\) parameterize the controls to be optimized, and the cost function \(C(\mathbf v, \boldsymbol \beta)\) is the operational infidelity describing the quality of the resulting gate relative to a target gate with noise through the stochastic variables \(\boldsymbol \beta\). When combined with the node definitions in the Boulder Opal package, which make it convenient to define such cost functions, this function provides a highly configurable framework for quantum control that encapsulates other common tools such as batch gradient ascent pulse engineering [1].

References

Examples

Perform a simple stochastic optimization.

>>> graph = bo.Graph()
>>> x = graph.optimization_variable(1, -1, 1, name="x")
>>> cost = (x - 0.5) ** 2
>>> cost.name = "cost"
>>> result = bo.run_stochastic_optimization(
...     graph=graph, cost_node_name="cost", output_node_names="x"
... )
>>> result["cost"], result["output"]
    (0.0, {'x': {'value': array([0.5])}})

To have a better understanding of the optimization landscape, you can use the cost_history_scope parameter to retrieve the cost history information from the optimizer. See the reference for the available options. For example, to retrieve all available history information:

>>> history_result = bo.run_stochastic_optimization(
...     graph=graph,
...     cost_node_name="cost",
...     output_node_names="x",
...     cost_history_scope="ALL",
... )

You can then access the history information from the cost_history key. We only show here the last two records to avoid a lengthy output.

>>> history_result["cost_history"]["iteration_values"][-2:]
    [1.9721522630525295e-31, 1.9721522630525295e-31]
>>> history_result["cost_history"]["historical_best"][-2:]
    [0.0, 0.0]

See also the How to optimize controls robust to strong noise sources user guide.