run_stochastic_optimization

boulderopal.run_stochastic_optimization(graph, cost_node_name, output_node_names, optimizer=None, iteration_count=1000, target_cost=None, cost_history_scope=HistoryScope.NONE, seed=None)

Perform gradient-based stochastic optimization of generic real-valued functions.

Use this function to determine a choice of variables that minimizes the value of a stochastic scalar real-valued cost function of those variables. You express that cost function as a graph describing how the input variables and stochastic variables are transformed into the cost value.

Parameters

  • graph (Graph) – The graph describing the cost $C(\mathbf{v}, \boldsymbol{\beta})$ and outputs $\{F_j(\mathbf{v})\}$ as functions of the optimizable input variables $\mathbf{v}$. The graph must contain nodes with names $s$ (giving the cost function) and $\{s_j\}$ (giving the output functions).
  • cost_node_name (str) – The name $s$ of the real-valued scalar graph node that defines the cost function $C(\mathbf{v}, \boldsymbol{\beta})$.
  • output_node_names (str or list[str]) – The names $\{s_j\}$ of the graph nodes that define the output functions $\{F_j(\mathbf{v})\}$.
  • optimizer (Adam or str or None) – The optimizer used for the stochastic optimization. You can either create an Adam optimizer for this parameter (see the sketch after this parameter list), or pass the string representing the optimizer state from a previous optimization result (the value under the state key in the dictionary returned by this function) to resume that optimization. Note that you can only resume a previous optimization if you pass the same graph. Defaults to a new Adam optimizer.
  • iteration_count (int, optional) – The number $N$ of iterations the optimizer performs. Defaults to 1000.
  • target_cost (float or None, optional) – A target value of the cost that you can set as an early-stop condition for the optimizer. If the cost becomes equal to or smaller than this value, the optimization halts. Defaults to None, which means that this function runs until iteration_count is reached.
  • cost_history_scope (HistoryScope, optional) – Configuration for the scope of the returned cost history data. Use this to select how you want the history data to be returned. Defaults to returning no cost history data.
  • seed (int or None , optional) – Seed for the random number generator used in the optimizer. If set, must be a non-negative integer. Use this option to generate deterministic results from the optimizer. Note that if your graph contains nodes generating random samples, you need to also set seeds for those nodes to ensure a reproducible result.
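
For example, you can pass an explicitly configured optimizer together with a seed. This is a minimal sketch: the learning_rate argument to Adam is an assumption here, so check the boulderopal.stochastic.Adam reference for its exact signature and default value.

>>> import boulderopal as bo
>>> graph = bo.Graph()
>>> x = graph.optimization_variable(1, -1, 1, name="x")
>>> cost = (x - 0.5) ** 2
>>> cost.name = "cost"
>>> optimizer = bo.stochastic.Adam(learning_rate=0.005)  # learning_rate is assumed
>>> result = bo.run_stochastic_optimization(
...     graph=graph,
...     cost_node_name="cost",
...     output_node_names="x",
...     optimizer=optimizer,
...     iteration_count=500,
...     seed=0,  # fixes the optimizer's random number generator
... )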

Returns

A dictionary containing the optimization result, with the following keys:

cost : The minimum cost function value $C(\mathbf{v}_\mathrm{optimized})$.

output : The dictionary giving the value of each requested output node, evaluated at the optimized variables, namely $\{s_j: F_j(\mathbf{v}_\mathrm{optimized})\}$. The keys of the dictionary are the names $\{s_j\}$ of the output nodes.

state : The encoded optimizer state. You can pass this string as the optimizer parameter to resume the optimization from the current step (see the sketch after the return type below).

cost_history : The evolution of the cost function across all optimization iterations.

metadata : Metadata associated with the calculation. No guarantees are made about the contents of this metadata dictionary; the contained information is intended purely to help interpret the results of the calculation on a one-off basis.

Return type

dict
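
For example, you can resume a previous optimization by passing the encoded state back as the optimizer. In this minimal sketch, result is the dictionary returned by an earlier call that used the same graph and node names:

>>> resumed_result = bo.run_stochastic_optimization(
...     graph=graph,
...     cost_node_name="cost",
...     output_node_names="x",
...     optimizer=result["state"],
... )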

SEE ALSO

boulderopal.stochastic.Adam : Create an Adam optimizer for stochastic optimization.

boulderopal.closed_loop.optimize : Run a closed-loop optimization to find a minimum of the given cost function.

boulderopal.closed_loop.step : Perform a single step in a closed-loop optimization.

boulderopal.execute_graph : Evaluate generic functions.

boulderopal.run_gradient_free_optimization : Perform model-based optimization without using gradient values.

boulderopal.run_optimization : Perform gradient-based deterministic optimization of generic real-valued functions.

Notes

Given a cost function $C(\mathbf{v}, \boldsymbol{\beta})$ of optimization variables $\mathbf{v}$ and stochastic variables $\boldsymbol{\beta}$, this function computes an estimate $\mathbf{v}_\mathrm{optimized}$ of $\operatorname{argmin}_{\mathbf{v}} C(\mathbf{v}, \boldsymbol{\beta})$, namely the choice of variables $\mathbf{v}$ that minimizes $C(\mathbf{v}, \boldsymbol{\beta})$ with noise through the stochastic variables $\boldsymbol{\beta}$. The function then calculates the values of arbitrary output functions $\{F_j(\mathbf{v}_\mathrm{optimized})\}$ with that choice of variables.

This function represents the cost and output functions as nodes of a graph. This graph defines the input variables $\mathbf{v}$ and stochastic variables $\boldsymbol{\beta}$, and how these variables are transformed into the corresponding cost and output quantities. You build the graph from primitive nodes defined in the Graph object. Each such node, which can be identified by a name, represents a function of the previous nodes in the graph (and thus, transitively, a function of the input variables). You can use any named scalar real-valued node $s$ as the cost function, and any named nodes $\{s_j\}$ as outputs.
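
As a minimal sketch of this structure, the graph below defines one optimization variable, one stochastic variable, and a named cost node. The graph.random.normal node used to sample the stochastic variable is an assumption here; check the Graph reference for the random-sampling nodes that are actually available and their signatures.

>>> graph = bo.Graph()
>>> x = graph.optimization_variable(1, -1, 1, name="x")
>>> beta = graph.random.normal(shape=(1,), mean=0.0, standard_deviation=0.1)  # assumed node
>>> cost = (x - 0.5 + beta) ** 2  # real-valued cost node
>>> cost.name = "cost"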

After you provide a cost function $C(\mathbf{v}, \boldsymbol{\beta})$ (via a graph), this function runs the optimization process for $N$ iterations, each with random values of the stochastic variables, to identify local minima of the stochastic cost function, and then takes the variables corresponding to the best such minimum as $\mathbf{v}_\mathrm{optimized}$.

Note that this function only performs a single optimization run. This means that if you provide lists of initial values for the optimization variables in the graph, only the first value for each variable is used.

A common use case for this function is to determine controls for a quantum system that yield an optimal gate subject to noise: the variables $\mathbf{v}$ parameterize the controls to be optimized, and the cost function $C(\mathbf{v}, \boldsymbol{\beta})$ is the operational infidelity describing the quality of the resulting gate relative to a target gate, with noise entering through the stochastic variables $\boldsymbol{\beta}$ [1].
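
The snippet below is an illustrative sketch of this use case, not a complete recipe: it optimizes a piecewise-constant drive for an X gate under multiplicative amplitude noise. The graph nodes used here (graph.pwc_signal, graph.random.normal, graph.target, graph.infidelity_pwc) and their signatures are assumptions based on the Graph reference; see the user guide linked in the Examples section for a full treatment.

>>> import numpy as np
>>> sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
>>> graph = bo.Graph()
>>> amplitudes = graph.optimization_variable(10, -1.0, 1.0, name="amplitudes")
>>> drive = graph.pwc_signal(values=amplitudes, duration=1.0)  # assumed node
>>> beta = graph.random.normal(shape=(1,), mean=0.0, standard_deviation=0.05)  # assumed node
>>> hamiltonian = (1.0 + beta) * drive * sigma_x / 2  # amplitude noise on the drive
>>> infidelity = graph.infidelity_pwc(  # assumed node
...     hamiltonian=hamiltonian,
...     target=graph.target(operator=sigma_x),
...     name="infidelity",
... )
>>> result = bo.run_stochastic_optimization(
...     graph=graph, cost_node_name="infidelity", output_node_names="amplitudes"
... )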

References

[1] R. Wu, H. Ding, D. Dong, and X. Wang, Physical Review A 99, 042327 (2019).

Examples

Perform a simple stochastic optimization.

>>> graph = bo.Graph()
>>> x = graph.optimization_variable(1, -1, 1, name="x")
>>> cost = (x - 0.5) ** 2
>>> cost.name = "cost"
>>> result = bo.run_stochastic_optimization(
...     graph=graph, cost_node_name="cost", output_node_names="x"
... )
>>> result["cost"], result["output"]
    (0.0, {'x': {'value': array([0.5])}})

To have a better understanding of the optimization landscape, you can use the cost_history_scope parameter to retrieve the cost history information from the optimizer. See the reference for the available options. For example, to retrieve all available history information:

>>> history_result = bo.run_stochastic_optimization(
...     graph=graph,
...     cost_node_name="cost",
...     output_node_names="x",
...     cost_history_scope="ALL",
... )

You can then access the history information from the cost_history key. We only show here the last two records to avoid a lengthy output.

>>> history_result["cost_history"]["iteration_values"][-2:]
    [1.9721522630525295e-31, 1.9721522630525295e-31]
>>> history_result["cost_history"]["historical_best"][-2:]
    [0.0, 0.0]

See also the How to optimize controls robust to strong noise sources user guide.
