How to automate closed-loop hardware optimization

Closed-loop optimization without complete system models

The Q-CTRL Python package contains automated closed-loop optimization tools that do not require a complete understanding of the workings of your quantum system. These tools allow you to run optimizations driven by measured data rather than by a detailed model of all aspects of your system. This notebook shows how you can find optimized solutions using Q-CTRL's closed-loop optimizers interacting directly with an experiment (here, simulated).

Closed-loop optimization framework

You can use the automated closed-loop optimizers to create a closed optimization loop in which the optimizer communicates with the experimental apparatus without your direct involvement. In this kind of setting, your experimental apparatus produces an initial set of results, which it sends to the optimizer. Using this information, the optimizer produces a set of improved test points, which it recommends back to the experimental apparatus. The results corresponding to these test points are sent back to the optimizer, and the cycle repeats until one of the results has a sufficiently low cost function value, or until any other ending condition that you impose is met. This setup is illustrated in the figure below.

(Figure: the closed-loop optimization cycle, in which test points and measurement results pass back and forth between the optimizer and the experimental apparatus.)

Summary workflow of closed-loop optimization

1. Identify data source

A closed-loop optimization attempts to find a set of parameters that minimizes the value of a cost function. The parameters are real numbers that represent quantities that you can control and change from experiment to experiment. This means that the exact nature of the parameters depends on the problem that you are trying to solve, but in the context of quantum control they typically represent the values of a piecewise-constant control pulse.

From the result of each experiment, you must obtain a cost, a real number that represents a quantity that you want to minimize. The exact nature of the cost also depends on the problem you want to solve, but in the context of quantum control the cost will typically be an infidelity with respect to the ideal result of the operation.

When you collect the data, you will provide each set of parameters with an associated cost (and possibly a cost_uncertainty) to the closed-loop optimization function in the form of a CostFunctionResult object:

qctrl.types.closed_loop_optimization_step.CostFunctionResult(parameters, cost, cost_uncertainty)
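For example, a single measured test point with two hypothetical parameter values and an illustrative cost could be packaged as:

result = qctrl.types.closed_loop_optimization_step.CostFunctionResult(
    parameters=[0.5, -0.2], cost=0.08, cost_uncertainty=0.01
)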

2. Set up an interface with the experiment

During the course of the closed-loop optimization, the optimizer will request that you obtain more data points by executing experiments using new sets of parameters and returning the resulting costs. It will be convenient for you to set up a function that interfaces with your experimental apparatus by accepting sets of parameters and returning the corresponding costs. The nature of this interface will depend on your experimental apparatus.
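As a minimal sketch, such a function might look like the following, where send_pulse_and_measure_infidelity is a hypothetical stand-in for your hardware-specific calls:

def run_experiments(parameter_sets):
    """
    Hypothetical interface: runs one experiment per set of parameters on the
    apparatus and returns the corresponding measured costs.
    """
    return [
        send_pulse_and_measure_infidelity(parameters)  # hypothetical hardware call
        for parameters in parameter_sets
    ]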

Establish experimental batching

In some cases it is advantageous to configure the optimizer to accept multiple test point/measurement pairs in each step, for example when measurements carry substantial communication latency on a cloud quantum computer. If your apparatus does not support this type of batching, you can achieve a simpler integration by using M-LOOP, which handles the optimization loop for you.

In this case, you can use an Interface object from M-LOOP to call your experimental apparatus, and a QctrlController from the Q-CTRL M-LOOP integration package to send the retrieved data to Boulder Opal. A simple setup would look like the following (where the settings passed to Interface and Optimizer depend on the optimization that you want to perform):

qctrl = Qctrl()
interface = Interface(<interface settings>)
optimizer = qctrl.types.closed_loop_optimization_step.Optimizer(<optimizer settings>)
controller = QctrlController(
    interface, qctrl=qctrl, optimizer=optimizer, target_cost=0.01, num_params=1,
)
controller.optimize()

3. Configure closed-loop optimization

Determine initial seed

Before starting the closed-loop optimization, you will need to collect a few initial results for the optimizer to use as a starting point. Select a range of parameter sets that are valid for the problem that you are considering and run the experiment with them. In the case of quantum control, this will typically mean subjecting the qubits to a range of pulse shapes and retrieving the associated infidelities. Store the results in a list of CostFunctionResult objects.
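As a sketch, assuming the run_experiments interface from the previous step (the bounds, point count, and parameter count here are illustrative):

import numpy as np

# Draw 20 random test points, each with 10 parameters, within illustrative bounds.
rng = np.random.default_rng()
parameter_set = rng.uniform(-1.0, 1.0, size=(20, 10))

# Measure the cost of each test point through your experimental interface.
initial_costs = run_experiments(parameter_set)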

Select and initialize the optimizer

Initialize the optimizer with the configuration that is appropriate for your experiment. The documentation of the qctrl.types.closed_loop_optimization_step namespace contains information about all available initializer objects, which determine the optimization engine to be employed.

You must then pass this initializer object to an instance of qctrl.types.closed_loop_optimization_step.Optimizer, an object that keeps track of the settings and current state of the optimization. Details of all available Q-CTRL automated optimizers are given in our optimizer documentation. Your first instance of the Optimizer object receives the initializer of the method that you chose, while each subsequent instance just needs to receive the state argument, a binary object in which the automated closed-loop optimizer stores the current state of the optimization. Note that you must pass exactly one of these arguments to the Optimizer at a time.
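Schematically, using the Gaussian process initializer from the first worked example below:

# First call: pass the initializer of the method that you chose.
optimizer = qctrl.types.closed_loop_optimization_step.Optimizer(
    gaussian_process_initializer=initializer
)

# Subsequent calls: pass only the state returned by the previous step.
optimizer = qctrl.types.closed_loop_optimization_step.Optimizer(
    state=optimization_result.state
)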

4. Execute optimization

You can start the loop that performs the optimization by calling the function qctrl.functions.calculate_closed_loop_optimization_step repeatedly inside a while loop conditioned on the achieved cost value. Provide the latest results of your experiments as a list of CostFunctionResult objects every time that you call this function. This function then returns an updated state that you can pass to the next instance of the Optimizer, and provides a new list of parameters to try. After you run another set of experiments, the automated closed-loop optimizer is called again, and the cycle repeats until you have reached the desired value of the cost.
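A minimal sketch of this loop follows; target_cost is a placeholder for your ending condition, and the worked examples below show complete implementations.

# Sketch of the optimization loop (target_cost, run_experiments, and
# organize_results stand in for your own threshold and interface).
while min(costs) > target_cost:
    optimization_result = qctrl.functions.calculate_closed_loop_optimization_step(
        optimizer=optimizer,
        results=organize_results(parameter_set, costs),
        test_point_count=test_point_count,
    )
    # Pass the updated state to the next instance of the Optimizer.
    optimizer = qctrl.types.closed_loop_optimization_step.Optimizer(
        state=optimization_result.state
    )
    # Run the experiments that the optimizer requested.
    parameter_set = [point.parameters for point in optimization_result.test_points]
    costs = run_experiments(parameter_set)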

Worked example: Designing an optimal control for a qubit subject to unknown control operators using Gaussian processes

Consider a qubit whose precise Hamiltonian is unknown to you. Specifically, suppose that you want to create an optimized X gate but your Hamiltonian contains unknown terms:

$$ H(t) = \frac{\Omega(t)}{2} \left( \sigma_x + Q_\text{unknown} \right). $$

In the previous equation, $\Omega(t)$ defines the control pulse, and $Q_\text{unknown}$ is an extra unknown term that appears when you apply your control. This worked example shows how you can find the optimal pulse for this system without ever having to learn the form of this extra term.

Identify data source

In this example, the data source is an experimental setup where you apply your pulses. In this case, the parameters are the values of $\Omega(t)$ in a piecewise-constant pulse and the cost is the value of the infidelity with respect to the desired X gate. The function in the following code block bundles this data in the format used by the closed-loop optimizer.

import matplotlib.pyplot as plt
import numpy as np
from qctrlvisualizer import plot_controls

from qctrl import Qctrl

# Start a session with the API.
qctrl = Qctrl()

# Define standard deviation of the errors in the experimental results.
sigma = 0.01

# Function to organize the experiment results into the proper input format.
def organize_results(omegas, infidelities):
    """
    This function accepts a list of parameters and a list of costs, and
    organizes them in a format that is accepted by the closed-loop optimizer.
    The uncertainties in the cost are assumed to be equal.
    """
    return [
        qctrl.types.closed_loop_optimization_step.CostFunctionResult(
            parameters=list(parameters), cost=cost, cost_uncertainty=sigma
        )
        for parameters, cost in zip(omegas, infidelities)
    ]

Set up an interface with the experiment

In a practical situation, you'll be obtaining the data from your experimental equipment. In this example, the experimental results are simulated using Boulder Opal. In either case, you'll need a function that accepts the $\Omega(t)$ parameters that you pass and returns the corresponding infidelities, which act as the cost.

# Define standard matrices.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# Define control parameters.
duration = 1e-6  # s

# Create a random unknown operator.
rng = np.random.default_rng(seed=10)
phi = rng.uniform(-np.pi, np.pi)
u = rng.uniform(-1, 1)
Q_unknown = (
    u * sigma_z + np.sqrt(1 - u ** 2) * (np.cos(phi) * sigma_x + np.sin(phi) * sigma_y)
) / 4

# Establish a simulation model of the quantum system for use in the optimization loop.
def run_experiments(omegas):
    """
    Simulates a series of experiments where controls `omegas` attempt to apply
    an X gate to a system. The result of each experiment is the infidelity plus
    a Gaussian error.

    In your actual implementation, this function would run the experiment with
    the parameters passed. Note that the simulation handles multiple test points,
    while your experimental implementation might need to queue the test point
    requests to obtain one at a time from the apparatus.
    """
    # Create the graph with the dynamics of the system.
    graph = qctrl.create_graph()
    signal = graph.pwc_signal(values=omegas, duration=duration)
    graph.infidelity_pwc(
        hamiltonian=0.5 * signal * (sigma_x + Q_unknown),
        target=graph.target(operator=sigma_x),
        name="infidelities",
    )

    # Run the simulation.
    result = qctrl.functions.calculate_graph(
        graph=graph, output_node_names=["infidelities"]
    )

    # Add error to the measurement.
    error_values = rng.normal(loc=0, scale=sigma, size=len(omegas))
    infidelities = result.output["infidelities"]["value"] + error_values

    # Clip the measured infidelities to the physical range [0, 1].
    return np.clip(infidelities, 0, 1)

Configure closed-loop optimization

Determine initial seed

After setting up the experimental interface, you need to obtain a set of initial results. You will use these as the initial input for the automated closed-loop optimization algorithm.

The following code simulates the experiment with different controls to obtain 20 initial results, including one set of controls that would create the desired gate if no extra terms were present in the Hamiltonian.

# Define the number of test points obtained per run.
test_point_count = 20

# Define number of segments in the control.
segment_count = 10


def initialize_parameter_set():
    parameter_set = (
        (np.pi / duration)
        * (np.linspace(-1, 1, test_point_count)[:, None])
        * np.ones((test_point_count, segment_count))
    )

    return parameter_set


# Define parameters as a set of controls with piecewise constant segments.
parameter_set = initialize_parameter_set()

# Obtain a set of initial experimental results.
experiment_results = run_experiments(parameter_set)

Select and initialize the optimizer

This example uses the object qctrl.types.closed_loop_optimization_step.GaussianProcessInitializer to set up an automated closed-loop optimization that uses the Gaussian process method (GP). You can use analogous objects to initialize other methods of optimization, although the set of arguments will vary with the method.

# Define initialization object for the automated closed-loop optimization.
length_scale_bound = qctrl.types.closed_loop_optimization_step.BoxConstraint(
    lower_bound=1e-5, upper_bound=1e5
)
bound = qctrl.types.closed_loop_optimization_step.BoxConstraint(
    lower_bound=-5 * np.pi / duration, upper_bound=5 * np.pi / duration
)
initializer = qctrl.types.closed_loop_optimization_step.GaussianProcessInitializer(
    length_scale_bounds=[length_scale_bound] * segment_count,
    bounds=[bound] * segment_count,
    rng_seed=0,
)

# Define state object for the closed-loop optimization.
optimizer = qctrl.types.closed_loop_optimization_step.Optimizer(
    gaussian_process_initializer=initializer,
)

Execute optimization

best_cost, best_controls = min(
    zip(experiment_results, parameter_set), key=lambda params: params[0]
)
optimization_count = 0

# Run the optimization loop until the cost (infidelity) is sufficiently small.
while best_cost > 3 * sigma:
    # Print the current best cost.
    optimization_steps = (
        "optimization step" if optimization_count == 1 else "optimization steps"
    )
    print(
        f"Best infidelity after {optimization_count} Boulder Opal {optimization_steps}: {best_cost}"
    )

    # Organize the experiment results into the proper input format.
    results = organize_results(parameter_set, experiment_results)

    # Call the automated closed-loop optimizer and obtain the next set of test points.
    optimization_result = qctrl.functions.calculate_closed_loop_optimization_step(
        optimizer=optimizer, results=results, test_point_count=test_point_count
    )
    optimization_count += 1

    # Organize the data returned by the automated closed-loop optimizer.
    parameter_set = np.array(
        [test_point.parameters for test_point in optimization_result.test_points]
    )
    optimizer = qctrl.types.closed_loop_optimization_step.Optimizer(
        state=optimization_result.state
    )

    # Obtain experiment results that the automated closed-loop optimizer requested.
    experiment_results = run_experiments(parameter_set)

    # Record the best results after this round of experiments.
    cost, controls = min(
        zip(experiment_results, parameter_set), key=lambda params: params[0]
    )
    if cost < best_cost:
        best_cost = cost
        best_controls = controls

# Print final best cost.
print(f"Infidelity: {best_cost}")

# Plot controls that correspond to the best cost.
plot_controls(
    figure=plt.figure(),
    controls={
        r"$\Omega(t)$": [
            {"duration": duration / len(best_controls), "value": value}
            for value in best_controls
        ]
    },
)
Best infidelity after 0 Boulder Opal optimization steps: 0.10028633187622513
Best infidelity after 1 Boulder Opal optimization step: 0.05410177428273324
Best infidelity after 2 Boulder Opal optimization steps: 0.05410177428273324
Best infidelity after 3 Boulder Opal optimization steps: 0.03588406627341298
Best infidelity after 4 Boulder Opal optimization steps: 0.03588406627341298
Best infidelity after 5 Boulder Opal optimization steps: 0.03588406627341298
Infidelity: 0.0195805238032383

Summary of the GP optimizer

The Gaussian process (GP) optimization tool allows you to obtain optimal controls without complete knowledge of the dynamics of the system. The Gaussian process optimizer is just one of several optimizers offered in the Q-CTRL Python package.

The following section demonstrates the same optimization task with a different closed-loop optimizer—simulated annealing (SA).

Worked example: Designing an optimal control for a qubit subject to unknown control operators using simulated annealing

This section employs the same model as above, but uses the object qctrl.types.closed_loop_optimization_step.SimulatedAnnealingInitializer to set up an automated closed-loop optimization that uses the simulated annealing (SA) method. The documentation of the qctrl.types.closed_loop_optimization_step namespace contains information about all the initializer objects.

As before, you must pass this initializer object to an instance of qctrl.types.closed_loop_optimization_step.Optimizer (in this case, as the simulated_annealing_initializer argument), while subsequent instances just need to receive the state argument.

Configure closed-loop optimization

Determine initial seed

# Reinitialize the parameter set to seed the new optimization.
# Define parameters as a set of controls with piecewise constant segments.
parameter_set = initialize_parameter_set()

# Obtain a set of initial experimental results.
experiment_results = run_experiments(parameter_set)

Select and initialize the optimizer

One notable difference between GP and SA is SA's use of the temperatures and temperature_cost hyperparameters, which control the overall exploration and greediness of the optimizer, respectively. More difficult optimization problems typically require higher temperatures, because high-fidelity controls tend to vary greatly from the initial guesses provided to the optimizer.

In real-life problems, determining the optimal choice of temperatures and temperature_cost is generally neither feasible nor necessary, but some level of searching usually needs to be done on your part. Here, the temperatures have been set to 400000 after testing temperatures of varying magnitude (400, 4000, 40000, and so on). Such a search is often easily parallelizable; heuristically, temperatures one order of magnitude smaller than the provided bound tend to be a decent starting point for the search. Similar heuristics apply to temperature_cost, where a grid search starting approximately one order of magnitude smaller than the range of the cost tends to be a decent starting point. For additional hyperparameter tuning methods, see the Wikipedia article on hyperparameter optimization.
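For instance, such a coarse, easily parallelized search over temperature magnitudes might be set up as in the following sketch, reusing the bound and segment_count variables defined in this notebook; each initializer would seed its own short optimization, and you would keep the setting that yields the lowest final cost.

# Sketch: one simulated annealing initializer per candidate temperature magnitude.
candidate_temperatures = [4e2, 4e3, 4e4, 4e5]
candidate_initializers = [
    qctrl.types.closed_loop_optimization_step.SimulatedAnnealingInitializer(
        bounds=[bound] * segment_count,
        temperatures=[temperature] * segment_count,
        temperature_cost=0.25,
        rng_seed=0,
    )
    for temperature in candidate_temperatures
]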

# Define initialization object for the simulated annealing optimizer.
bound = qctrl.types.closed_loop_optimization_step.BoxConstraint(
    lower_bound=-5 * np.pi / duration, upper_bound=5 * np.pi / duration
)

initializer = qctrl.types.closed_loop_optimization_step.SimulatedAnnealingInitializer(
    bounds=[bound] * segment_count,
    temperatures=[400000] * segment_count,
    temperature_cost=0.25,
    rng_seed=0,
)

# Define state object for the closed-loop optimization.
optimizer = qctrl.types.closed_loop_optimization_step.Optimizer(
    simulated_annealing_initializer=initializer
)

Execute optimization

best_cost, best_controls = min(
    zip(experiment_results, parameter_set), key=lambda params: params[0]
)
optimization_count = 0

# Run the optimization loop until the cost (infidelity) is sufficiently small.
while best_cost > 3 * sigma:
    # Print the current best cost.
    optimization_steps = (
        "optimization step" if optimization_count == 1 else "optimization steps"
    )
    print(
        f"Best infidelity after {optimization_count} Boulder Opal {optimization_steps}: {best_cost}"
    )

    # Organize the experiment results into the proper input format.
    results = organize_results(parameter_set, experiment_results)

    # Call the automated closed-loop optimizer and obtain the next set of test points.
    optimization_result = qctrl.functions.calculate_closed_loop_optimization_step(
        optimizer=optimizer, results=results, test_point_count=test_point_count
    )
    optimization_count += 1

    # Organize the data returned by the automated closed-loop optimizer.
    parameter_set = np.array(
        [test_point.parameters for test_point in optimization_result.test_points]
    )
    optimizer = qctrl.types.closed_loop_optimization_step.Optimizer(
        state=optimization_result.state
    )

    # Obtain experiment results that the automated closed-loop optimizer requested.
    experiment_results = run_experiments(parameter_set)

    # Record the best results after this round of experiments.
    cost, controls = min(
        zip(experiment_results, parameter_set), key=lambda params: params[0]
    )
    if cost < best_cost:
        best_cost = cost
        best_controls = controls

# Print final best cost.
print(f"Infidelity: {best_cost}")

# Plot controls that correspond to the best cost.
plot_controls(
    figure=plt.figure(),
    controls={
        r"$\Omega(t)$": [
            {"duration": duration / len(best_controls), "value": value}
            for value in best_controls
        ]
    },
)
Best infidelity after 0 Boulder Opal optimization steps: 0.08598060967179108
Best infidelity after 1 Boulder Opal optimization step: 0.08598060967179108
Best infidelity after 2 Boulder Opal optimization steps: 0.08598060967179108
Best infidelity after 3 Boulder Opal optimization steps: 0.038766020049559265
Infidelity: 0.01995445170527204

Summary

The automated closed-loop optimization tools in the Q-CTRL Python package allow you to obtain optimal controls even without complete knowledge of the dynamics of the system. These examples demonstrate that the various optimizers can find controls that yield low infidelities without any explicit assumptions about the Hamiltonian.