Choosing a control-design (optimization) strategy in Boulder Opal

An overview of choices and tradeoffs in control design for your quantum system

Control design is a key activity in quantum control—whether it's designing a new error-robust quantum logic gate or crafting a control strategy to efficiently navigate a complicated quantum system. Boulder Opal provides the most comprehensive and well-tested suite of tools to perform numeric optimization of arbitrary controls in quantum systems.

Identifying which strategy is right for you depends on the details of your system and your control constraints. For example:

  • Do you have a model of your system that you can use to optimize controls offline? Or are there substantial uncertainties in your understanding of the system that are difficult to capture in a mathematical model?
  • Does your system experience no noise? Weak noise? Strong noise?
  • Is your optimization constrained to just a few parameters? Or is it a high-dimensional problem?

The answers to these questions can help you select a strategy that's right for your needs, as sketched by the simple flow chart below. For a detailed technical discussion of optimization in quantum control, see Sec. III of "Software tools for quantum control: Improving quantum computer performance through noise and error suppression."

[Flow chart: choosing a control-design (optimization) strategy in Boulder Opal]

Robust control

Robust control is an extremely effective approach to control optimization for both unitary evolution and state preparation in quantum systems. The starting point is a mathematical model of the system, which is used to create optimized solutions offline for later deployment. Robust control may involve a range of numeric techniques to achieve a target operation or system evolution that minimizes a desired cost function. By design it is resilient against imperfections in the system model, and it has been shown to deliver substantial performance enhancements in real quantum computers. For these reasons we recommend that any model-based control optimization you pursue begin with robust control.

In Boulder Opal, robust-control optimization may be conveniently encoded as a graph-based cost minimization. Designing optimized controls with noise robustness is achieved by including in the cost (or objective) function an infidelity metric that captures both the quality of the operation relative to a target and the impact of noise on your system. You can define the noise in a variety of ways, from fluctuations on the control amplitudes used in your system to ambient quasi-static processes such as dephasing.
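To make the structure of such a cost concrete, the following sketch defines a robust cost for a single-qubit X gate by averaging the gate infidelity over an ensemble of quasi-static dephasing offsets and minimizing it. It uses plain NumPy/SciPy rather than the Boulder Opal graph API, and all names and parameter values are illustrative.

```python
# Conceptual sketch of a robust-control cost (not the Boulder Opal API).
# A piecewise-constant Rabi drive implements an X gate; the cost averages the
# gate infidelity over quasi-static dephasing offsets, so the optimizer favors
# noise-robust solutions.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
target = expm(-1j * np.pi / 2 * sigma_x)  # X gate, up to a global phase

segment_count = 8
duration = 1.0
dt = duration / segment_count
dephasing_offsets = np.linspace(-1.0, 1.0, 5)  # quasi-static noise ensemble

def propagate(amplitudes, offset):
    """Evolve under a piecewise-constant drive plus a static dephasing offset."""
    unitary = np.eye(2, dtype=complex)
    for omega in amplitudes:
        hamiltonian = 0.5 * (omega * sigma_x + offset * sigma_z)
        unitary = expm(-1j * hamiltonian * dt) @ unitary
    return unitary

def robust_cost(amplitudes):
    """Gate infidelity averaged over the dephasing ensemble."""
    infidelities = [
        1 - np.abs(np.trace(target.conj().T @ propagate(amplitudes, offset)) / 2) ** 2
        for offset in dephasing_offsets
    ]
    return np.mean(infidelities)

result = minimize(robust_cost, x0=np.full(segment_count, np.pi / duration), method="L-BFGS-B")
print("Optimized robust cost:", result.fun)
```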

Full details on how to encode this optimization structure are captured in our robust optimization tutorial. You can also explore our user guides to learn how to include symmetries, nonlinearities, bandwidth limits/smoothing, and other constraints using computational graphs. There are also special approaches available for optimizing controls on large systems.

[Diagram: Boulder Opal computational graphs]

Robust control with perturbative cost functions and weak noise

In Boulder Opal you can use convenience functions that automatically calculate a candidate control's infidelity in the presence of noise based on the lowest-order filter-function approximation (see Sec. III.B of "Software tools for quantum control" for mathematical detail). This approach is validated to provide extremely accurate predictions of system evolution in the weak-noise limit (typically encountered when errors are below 1% in quantum logic).
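As a rough illustration of the idea (not the Boulder Opal convenience functions, and with overall normalization conventions omitted), the sketch below computes a lowest-order filter function for two simple dephasing-noise switching functions and overlaps it with an assumed 1/f noise power spectral density to compare their weak-noise infidelity contributions.

```python
# Conceptual sketch of a perturbative (filter-function) noise cost.
# The filter function of a dephasing-noise switching function y(t) is
# F(omega) = |integral of y(t) exp(i omega t) dt|^2; its overlap with the noise
# power spectral density estimates the noise contribution to infidelity in the
# weak-noise limit. Prefactor conventions are omitted here.
import numpy as np

duration = 10e-6  # total sequence duration (s)
times = np.linspace(0, duration, 1000, endpoint=False)
dt = times[1] - times[0]

# Switching functions: free evolution vs. a single echo (sign-flipping) pulse.
y_free = np.ones_like(times)
y_echo = np.where(times < duration / 2, 1.0, -1.0)

omegas = 2 * np.pi * np.linspace(1e3, 1e6, 500)  # angular frequencies (rad/s)
domega = omegas[1] - omegas[0]
psd = 1e4 / omegas  # illustrative 1/f dephasing-noise power spectral density

def filter_function(y):
    """F(omega) = |integral y(t) exp(i omega t) dt|^2 on the frequency grid."""
    phases = np.exp(1j * np.outer(omegas, times))
    return np.abs(phases @ (y * dt)) ** 2

def weak_noise_infidelity(y):
    """Overlap of the filter function and the PSD (prefactor convention omitted)."""
    return np.sum(filter_function(y) * psd) * domega / (2 * np.pi)

print("Free evolution:", weak_noise_infidelity(y_free))
print("Echo:          ", weak_noise_infidelity(y_echo))  # low-frequency noise suppressed
```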

You can learn how to integrate these costs into a graph-based optimization in our tutorial.

Robust control using stochastic optimization with multiple strong noise sources

An alternate model-based robust-control optimization approach calculates the cost function by incorporating noise terms directly into a simulation of the system and averaging over noise realizations to find the resultant fidelity. This approach avoids the issues that arise when the noise is strong enough to formally violate the low-order approximations typically used in perturbative calculations based on the filter function or Magnus expansion.
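The sketch below (plain NumPy/SciPy, not the Boulder Opal stochastic optimizer, with illustrative names and values) shows the core idea: the infidelity is averaged over a batch of sampled noise trajectories that enter the simulation directly, with no perturbative approximation.

```python
# Conceptual sketch of a stochastic (Monte Carlo averaged) cost for strong noise.
# Each noise realization is a piecewise-constant dephasing trajectory applied
# segment by segment in the simulation; the cost is the infidelity averaged
# over the sampled realizations.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(seed=0)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
target = expm(-1j * np.pi / 2 * sigma_x)

segment_count = 8
dt = 1.0 / segment_count
noise_strength = 2.0  # strong compared with the drive
noise_batch = noise_strength * rng.standard_normal((50, segment_count))

def averaged_infidelity(amplitudes):
    """Infidelity averaged over sampled piecewise-constant dephasing trajectories."""
    infidelities = []
    for trajectory in noise_batch:
        unitary = np.eye(2, dtype=complex)
        for omega, beta in zip(amplitudes, trajectory):
            hamiltonian = 0.5 * (omega * sigma_x + beta * sigma_z)
            unitary = expm(-1j * hamiltonian * dt) @ unitary
        infidelities.append(1 - np.abs(np.trace(target.conj().T @ unitary) / 2) ** 2)
    return np.mean(infidelities)

result = minimize(
    averaged_infidelity,
    x0=np.full(segment_count, np.pi),
    method="Nelder-Mead",
    options={"maxiter": 300},
)
print("Noise-averaged infidelity:", result.fun)
```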

You can also incorporate open-system dynamics into a stochastic optimization using a Lindblad-operator formalism, as all open-systems tools in Boulder Opal are graph based.

You can learn how to perform stochastic optimization directly through our stochastic optimization user guide.

Robust control using gradient-free optimization

Boulder Opal also provides a gradient-free optimizer that can be directly applied to model-based control optimization for arbitrary-dimensional quantum systems. The gradient-free optimizer is useful in cases where the gradient is either very costly to compute or inaccessible (for example, if the graph includes an operation that does not support gradients). Because the gradient is not computed, the memory requirements of gradient-free optimization are also much lower.
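As a minimal illustration (not the Boulder Opal API, with illustrative names and values), the sketch below optimizes a cost containing a non-differentiable quantization step using a gradient-free simplex search.

```python
# Conceptual sketch of gradient-free optimization. The cost quantizes the pulse
# amplitude to a coarse hardware-like grid, an operation with no useful
# gradient, so a simplex (Nelder-Mead) search is used instead of a
# gradient-based method.
import numpy as np
from scipy.optimize import minimize

def infidelity(parameters):
    amplitude, duration = parameters
    amplitude = np.round(amplitude / 0.05) * 0.05  # non-differentiable step
    # Population left in |0> after a resonant Rabi rotation of angle amplitude * duration.
    return np.cos(amplitude * duration / 2) ** 2

result = minimize(infidelity, x0=[1.0, 2.0], method="Nelder-Mead")
print("Optimized parameters:", result.x, "infidelity:", result.fun)
```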

You can learn how to use the gradient-free optimizer through our gradient-free optimization user guide.

Optimal control

Optimal control with an exact unchanging model

Optimal control is the routine choice in circumstances where you have an exact mathematical model of your system and that model is unchanging. The approach to optimization is similar to robust control; an optimal-control solution may be achieved by simply defining the operational fidelity of a task and omitting any noise terms from the infidelity calculation.
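The sketch below mirrors the robust-control sketch above but with the noise ensemble removed: the cost is a single noise-free propagation compared against the target (again plain NumPy/SciPy, not the Boulder Opal API).

```python
# Minimal sketch of an optimal-control cost: the same piecewise-constant X-gate
# problem as the robust-control sketch, but with a single noise-free propagation
# rather than an ensemble average.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
target = expm(-1j * np.pi / 2 * sigma_x)
segment_count, dt = 8, 1.0 / 8

def infidelity(amplitudes):
    unitary = np.eye(2, dtype=complex)
    for omega in amplitudes:
        unitary = expm(-1j * 0.5 * omega * sigma_x * dt) @ unitary
    return 1 - np.abs(np.trace(target.conj().T @ unitary) / 2) ** 2

result = minimize(infidelity, x0=np.full(segment_count, 1.0), method="L-BFGS-B")
print("Optimized infidelity:", result.fun)
```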

Our optimal control package allows you to incorporate open-system dynamics into a cost function using a Lindblad-operator formalism, as all open-systems tools in Boulder Opal are graph based.

WARNING: Optimal control can be extremely sensitive to small changes in the model; any fluctuations in the underlying parameters of the system model can make optimal solutions not only less effective, but in some cases completely incorrect. For this reason it's recommended that you instead rely on robust control when possible.

Closed-loop experimental optimization

In contrast to model-based optimization, closed-loop experimental optimization calculates the cost function by direct interaction with your experimental system. The optimizer iteratively proposes candidate solutions, and experimental measurement results of a suitable form are returned to it to guide the choice of the next candidate solution to test.

Because this approach does not rely at all on a Hamiltonian or other mathematical description of your quantum system, it works extremely well not only when your system is noisy, but also when it contains unknown couplings or energy levels, or when your controls suffer signal distortions arising from transmission lines or imperfect signal generation.

[Diagram: Boulder Opal workflow and core capabilities]

Well-designed measurements on the system reveal only the information necessary for the optimizer to converge on a high-performing candidate solution. As you will see in our experimental demonstrations on real quantum computers, "well-designed" measurements will generally involve SPAM-mitigation strategies, such as repeated application of a candidate solution, and techniques to manage contextual errors, such as averaging over different numbers of solution repetitions.

There are multiple approaches to closed-loop experimental optimization described below.

Black-box automated optimization with an unknown model and strong noise

The simplest approach to closed-loop optimization is to employ "black-box automated optimization", in which an agent simply tries to minimize an objective (cost) function through iterative interaction with your quantum system. It typically works well to construct solutions using discretized piecewise-constant functions (any signal distortions from sharp transitions are captured directly in the measurement process), but you can just as easily construct solutions as sums of basis functions. Black-box automated optimization is generally the best-performing option in cases where the noise, distortion, or other uncertainty in your system may be quite large.

There are several optimizers available, all of which are based on the same fundamental process. At each iteration they accept the measured costs for a set of test points and return a new set of points that further explores the objective-function landscape. After several iterations the distribution of generated test points should converge toward low values of the objective function.
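The sketch below illustrates this ask/tell pattern with a hand-written cross-entropy-style optimizer and a mock experiment function standing in for hardware measurements; it is not the Boulder Opal closed-loop API, and all names and values are illustrative.

```python
# Conceptual sketch of the closed-loop ask/tell pattern. A mock "experiment"
# returns a noisy measured cost for each batch of candidate controls; a simple
# cross-entropy-style optimizer refits its sampling distribution to the
# best-performing candidates at each iteration.
import numpy as np

rng = np.random.default_rng(seed=1)
parameter_count, batch_size, elite_count = 4, 20, 5

def run_experiment(candidates):
    # Stand-in for hardware measurements: true optimum at 0.3 plus shot noise.
    true_cost = np.sum((candidates - 0.3) ** 2, axis=1)
    return true_cost + 0.01 * rng.standard_normal(len(candidates))

mean, std = np.zeros(parameter_count), np.ones(parameter_count)
for iteration in range(30):
    # "Ask": draw a batch of candidate controls from the current distribution.
    candidates = mean + std * rng.standard_normal((batch_size, parameter_count))
    # Measure on the (mock) experiment and "tell" the optimizer the costs.
    costs = run_experiment(candidates)
    elite = candidates[np.argsort(costs)[:elite_count]]
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6

print("Best controls found:", mean)
```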

You can learn how to execute an automated optimization through our user guide, and also see how to incorporate convenience tools to aid data management. The available optimizers, each suited to different circumstances, are:

Simulated annealing

The simulated annealing method is based on a non-greedy random search with a varying trust region. It is best for high-dimensional problems (more than about 20 search parameters), such as pulse shaping with many basis functions or time segments.
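As an illustration of the method itself (using SciPy's dual_annealing rather than the Boulder Opal optimizer), the sketch below anneals a rugged 20-parameter cost standing in for measured infidelities of a many-segment pulse.

```python
# Illustration of simulated annealing with SciPy's dual_annealing.
import numpy as np
from scipy.optimize import dual_annealing

def rugged_cost(parameters):
    # Many shallow local minima; global minimum of zero at the origin.
    return np.sum(parameters**2 - np.cos(5 * parameters)) + len(parameters)

bounds = [(-2.0, 2.0)] * 20  # high-dimensional search space
result = dual_annealing(rugged_cost, bounds=bounds, maxiter=100, seed=2)
print("Best cost found:", result.fun)
```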

Gaussian processes

The Gaussian-processes method is an example of a surrogate-model optimizer, which uses the collected data to estimate a model of the underlying optimization landscape. It is best suited to small optimization problems, such as gate calibration, where only a few parameters are optimized (roughly six search parameters).
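As an illustration of surrogate-model (Bayesian) optimization (using the third-party scikit-optimize package rather than the Boulder Opal optimizer), the sketch below calibrates three hypothetical gate parameters against a stand-in measured infidelity.

```python
# Illustration of Gaussian-process surrogate-model optimization with
# scikit-optimize; each cost evaluation stands in for a hardware measurement.
import numpy as np
from skopt import gp_minimize

def measured_infidelity(parameters):
    amplitude, detuning, duration = parameters
    # Stand-in for a measured gate infidelity with a minimum near known values.
    return float((amplitude - 0.8) ** 2 + detuning**2 + (duration - 1.2) ** 2)

result = gp_minimize(
    measured_infidelity,
    dimensions=[(0.0, 2.0), (-1.0, 1.0), (0.5, 2.0)],
    n_calls=30,
    random_state=3,
)
print("Calibrated parameters:", result.x, "infidelity:", result.fun)
```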

Neural networks

The neural-networks method is conceptually similar to Gaussian processes but uses a sampled neural network to model the optimization landscape. While it is typically less reliable than Gaussian processes, each computational step can be much faster, making it a good option for high-dimensional optimizations.

Covariance matrix adaptation evolution strategy (CMA-ES)

The CMA-ES method is a stochastic search over a multivariate normal distribution for real-parameter optimization. Because the search distribution has large entropy and does not favor particular coordinate directions, it is best suited to rugged search landscapes and non-separable problems.
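As an illustration of the method (using the third-party cma package rather than the Boulder Opal optimizer), the sketch below runs the CMA-ES ask/tell loop on a rugged, non-separable stand-in cost.

```python
# Illustration of CMA-ES with the third-party "cma" package. The ask/tell loop
# mirrors the closed-loop pattern above; the cost is a non-separable, rugged
# stand-in for a measured infidelity.
import numpy as np
import cma

def rugged_cost(parameters):
    x = np.asarray(parameters)
    coupled = x[:-1] * x[1:]  # non-separable coupling between neighboring parameters
    return float(np.sum(x**2) + np.sum(coupled) + 0.1 * np.sum(1 - np.cos(10 * x)))

es = cma.CMAEvolutionStrategy(x0=8 * [0.5], sigma0=0.3)
for _ in range(50):
    candidates = es.ask()
    es.tell(candidates, [rugged_cost(c) for c in candidates])
print("Best cost found:", es.result.fbest)
```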
