calculate_stochastic_optimization¶

static FunctionNamespace.calculate_stochastic_optimization(*, graph, cost_node_name, output_node_names, iteration_count=1000, target_cost=None, optimizer=None, cost_history_scope='NONE', **kwargs)

Perform gradient-based stochastic optimization of generic real-valued functions.

Use this function to determine a choice of variables that minimizes the value of a stochastic scalar real-valued cost function of those variables. You express that cost function as a graph describing how the input variables and stochastic variables are transformed into the cost value.

Parameters
• graph (qctrl.graphs.Graph) – The graph describing the cost $$C(\mathbf v, \boldsymbol \beta)$$ and outputs $$\{F_j(\mathbf v)\}$$ as functions of the optimizable input variables $$\mathbf v$$ and the stochastic variables $$\boldsymbol \beta$$. The graph must contain nodes with names $$s$$ (giving the cost function) and $$\{s_j\}$$ (giving the output functions).

• cost_node_name (str) – The name $$s$$ of the real-valued scalar graph node that defines the cost function $$C(\mathbf v, \boldsymbol \beta)$$ to be minimized.

• output_node_names (List[str]) – The names $$\{s_j\}$$ of the graph nodes that define the output functions $$\{F_j(\mathbf v)\}$$. The function evaluates these using the optimized variables and returns them in the output. If any of the output nodes depend on random nodes, the random values used to calculate the output might not correspond to the values used to calculate the final cost.

• iteration_count (int, optional) – The number $$N$$ of iterations the optimizer performs until it halts. The function returns the results from the best iteration (the one with the lowest cost). Defaults to 1000.

• target_cost (float, optional) – A target value of the cost that you can set as an early stop condition for the optimizer. If the cost becomes equal to or smaller than this value, the optimization halts. Defaults to None, in which case the function runs until iteration_count is reached.

• optimizer (qctrl.dynamic.types.stochastic_optimization.Optimizer, optional) – The optimizer configuration. Defaults to Adam.

• cost_history_scope (qctrl.dynamic.types.HistoryScope, optional) – Configuration for the scope of the returned cost history data. Use this to select how you want the history data to be returned. Defaults to returning no cost history data.

Returns

The result of a stochastic optimization. It includes the minimized cost $$C(\mathbf v_\mathrm{optimized})$$, and the outputs $$\{s_j: F_j(\mathbf v_\mathrm{optimized})\}$$ corresponding to the variables that achieve that minimum cost.

Return type

qctrl.dynamic.types.stochastic_optimization.Result

See also

calculate_optimization()

Perform gradient-based deterministic optimization of generic real-valued functions.

random_choices()

Create random samples from the data that you provide.

random_colored_noise_stf_signal()

Generate noise trajectories from a power spectral density.

random_normal()

Create a sample of normally distributed random numbers.

random_uniform()

Create a sample of uniformly distributed random numbers.

Notes

Given a cost function $$C(\mathbf v, \boldsymbol \beta)$$ of optimization variables $$\mathbf v$$ and stochastic variables $$\boldsymbol \beta$$, this function computes an estimate $$\mathbf v_\mathrm{optimized}$$ of $$\mathrm{argmin}_{\mathbf v} C(\mathbf v, \boldsymbol \beta)$$, namely the choice of variables $$\mathbf v$$ that minimizes the cost in the presence of the noise introduced through the stochastic variables $$\boldsymbol \beta$$. The function then calculates the values of arbitrary output functions $$\{F_j(\mathbf v_\mathrm{optimized})\}$$ with that choice of variables.
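The idea can be illustrated outside the Q-CTRL API with a plain-NumPy sketch of stochastic gradient descent (all names here are illustrative, not part of the package): the cost $$C(v, \beta) = (v - 0.5 + \beta)^2$$ is evaluated with a fresh random $$\beta$$ at each step, and the variable nonetheless settles near the noise-free minimizer $$v = 0.5$$.

```python
import numpy as np

def stochastic_minimize(iteration_count=1000, learning_rate=0.1, seed=0):
    """Minimize C(v, beta) = (v - 0.5 + beta)**2 with beta ~ N(0, 0.1)."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(-1.0, 1.0)  # initial value of the optimization variable
    best_v, best_cost = v, np.inf
    for _ in range(iteration_count):
        beta = rng.normal(0.0, 0.1)  # fresh stochastic variable each iteration
        grad = 2.0 * (v - 0.5 + beta)  # gradient of the sampled cost
        v -= learning_rate * grad
        cost = (v - 0.5 + beta) ** 2
        if cost < best_cost:  # keep the variables with the lowest sampled cost
            best_v, best_cost = v, cost
    return best_v, best_cost
```

Despite the noise, the returned variable lies close to 0.5; the actual optimizer performs the analogous computation on the graph you supply, using automatic differentiation to obtain the gradients.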

This function represents the cost and output functions as nodes of a graph. This graph defines the input variables $$\mathbf v$$ and stochastic variables $$\boldsymbol \beta$$, and how these variables are transformed into the corresponding cost and output quantities. You build the graph from primitive nodes defined in the Q-CTRL Python package. Each such node, identified by a name, represents a function of the previous nodes in the graph (and thus, transitively, a function of the input variables). You can use any named scalar real-valued node $$s$$ as the cost function, and any named nodes $$\{s_j\}$$ as outputs.

After you provide a cost function $$C(\mathbf v, \boldsymbol \beta)$$ (via a graph), this function runs the optimization process for $$N$$ iterations, drawing a fresh random sample of the stochastic variables at each iteration, to identify local minima of the stochastic cost function, and then takes the variables corresponding to the best such minimum as $$\mathbf v_\mathrm{optimized}$$.
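The iteration loop, the historical-best bookkeeping, and the target_cost early-stop condition can be sketched as follows (a plain-NumPy illustration with hypothetical names, not the Q-CTRL implementation):

```python
import numpy as np

def run_with_early_stop(iteration_count=1000, target_cost=None, seed=1):
    """Loop sketch: resample the noise each iteration, track the best cost
    seen so far, and halt early once the cost reaches target_cost (if set)."""
    rng = np.random.default_rng(seed)
    v = 0.0
    best_cost = np.inf
    historical_best = []
    for _ in range(iteration_count):
        beta = rng.normal(0.0, 0.05)        # fresh stochastic variable
        v -= 0.1 * 2.0 * (v - 0.5 + beta)   # gradient step on the sampled cost
        cost = (v - 0.5 + beta) ** 2
        best_cost = min(best_cost, cost)
        historical_best.append(best_cost)   # akin to cost_history.historical_best
        if target_cost is not None and cost <= target_cost:
            break  # early stop: cost equal to or below the target
    return best_cost, historical_best
```

With a target_cost set, the loop typically halts well before iteration_count; without one, it runs all $$N$$ iterations and returns the best result found.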

Note that this function only performs a single optimization run. That means that if you provide lists of initial values for the optimization variables in the graph, only the first value for each variable is used.

A common use case for this function is to determine controls for a quantum system that yield an optimal gate subject to noise: the variables $$\mathbf v$$ parameterize the controls to be optimized, and the cost function $$C(\mathbf v, \boldsymbol \beta)$$ is the operational infidelity quantifying the quality of the resulting gate relative to a target gate, with the noise entering through the stochastic variables $$\boldsymbol \beta$$. When combined with the node definitions in the Q-CTRL Python package, which make it convenient to define such cost functions, this function provides a highly configurable framework for quantum control that encapsulates other common tools such as batch gradient ascent pulse engineering [1].
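As an illustration of such a cost (written in plain NumPy rather than the Q-CTRL graph API, with illustrative names), the operational infidelity $$1 - |\mathrm{Tr}(U_\mathrm{target}^\dagger U)/d|^2$$ of a single-qubit X rotation whose angle carries a stochastic multiplicative error $$\beta$$ could look like:

```python
import numpy as np

def rotation_x(angle):
    """Single-qubit rotation about the x axis by the given angle."""
    return np.array(
        [[np.cos(angle / 2), -1j * np.sin(angle / 2)],
         [-1j * np.sin(angle / 2), np.cos(angle / 2)]]
    )

def infidelity(angle, beta):
    """Operational infidelity 1 - |Tr(U_target^dagger U)/d|^2 of a noisy
    X rotation relative to the target pi rotation (an X gate)."""
    u_target = rotation_x(np.pi)
    u = rotation_x(angle * (1.0 + beta))  # beta: multiplicative angle error
    d = 2  # Hilbert-space dimension of a single qubit
    return 1.0 - np.abs(np.trace(u_target.conj().T @ u) / d) ** 2
```

In an actual optimization, the angle would be an optimization variable, $$\beta$$ a random node resampled each iteration, and this infidelity the cost node passed via cost_node_name.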

References

[1] R. Wu, H. Ding, D. Dong, and X. Wang, Physical Review A 99, 042327 (2019).

Examples

Perform a simple stochastic optimization.

>>> graph = qctrl.create_graph()
>>> x = graph.optimization_variable(1, -1, 1, name="x")
>>> cost = (x - 0.5) ** 2
>>> cost.name = "cost"
>>> result = qctrl.functions.calculate_stochastic_optimization(
...     graph=graph, cost_node_name="cost", output_node_names=["x"]
... )

>>> result.best_cost, result.best_output
(0.0, {'x': {'value': array([0.5])}})


To better understand the optimization landscape, you can use the cost_history_scope parameter to retrieve the cost history information from the optimizer. See the reference for the available options. For example, to retrieve all available history information:

>>> history_result = qctrl.functions.calculate_stochastic_optimization(
...     graph=graph,
...     cost_node_name="cost",
...     output_node_names=["x"],
...     cost_history_scope="ALL",
... )


You can then access the history information from the cost_history attribute. Here we show only the last two records to avoid a lengthy output.

>>> history_result.cost_history.iteration_values[-2:]
[1.9721522630525295e-31, 1.9721522630525295e-31]

>>> history_result.cost_history.historical_best[-2:]
[0.0, 0.0]