How to perform Hamiltonian parameter estimation using a small amount of measured data

Estimate Hamiltonian model parameters using measured data and the graph-based optimization engine

The optimization engine from Boulder Opal provides a large, modular collection of configuration options that allows it to be used for a range of tasks in quantum control, including estimating Hamiltonian parameters in a model using measured data.

In this user guide, we show how this engine can be used to determine the parameters that characterize aspects of a hardware system using a small amount of measured data. This example uses fewer than 100 data points and a single qubit. If your problem involves a large amount of data, of the order of many thousands of data points, and multiple qubits, consult the user guide How to perform parameter estimation using a large amount of measured data to see how to break the analysed data into smaller batches. Read our Characterizing your hardware using system identification in Boulder Opal topic to learn more about parameter estimation, and about which optimization engine to use depending on your system size or the amount of measured data you have. A step-by-step lesson on performing system identification is also available in our system identification tutorial.

Summary workflow for Hamiltonian parameter estimation

1. Perform measurements to probe the system

In general, the output of probe measurements will be estimates of various parameters, such as entries in a tomographic reconstruction of a state. You must craft probe experiments that give access to the terms of the model being captured, so that the measured data constrains the optimization search space.
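For intuition, a hypothetical probe of this kind can be simulated directly with NumPy and SciPy. This sketch is not part of the Boulder Opal workflow; the rotation rate and the choice of states are made up for illustration: preparing the $+1$ eigenstate of $\sigma_x$ and measuring the population in the $+1$ eigenstate of $\sigma_y$ after different wait times reveals a rotation about the $z$ axis.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical probe experiment: prepare the qubit in the +1 eigenstate of
# sigma_x, evolve it under H = omega_z * sigma_z / 2, and measure the
# population in the +1 eigenstate of sigma_y after each wait time.
sigma_z = np.array([[1, 0], [0, -1]])
ket_xp = np.array([[1], [1]]) / np.sqrt(2)
ket_yp = np.array([[1], [1j]]) / np.sqrt(2)

omega_z = 2 * np.pi * 1e6  # rad/s, assumed rotation rate
wait_times = np.linspace(0, 1e-6, 25)  # s

populations = np.array(
    [
        np.abs(ket_yp.conj().T @ expm(-1j * t * omega_z * sigma_z / 2) @ ket_xp) ** 2
        for t in wait_times
    ]
).reshape(-1)

# Analytically these populations follow (1 + sin(omega_z * t)) / 2, so each
# measured point constrains the rotation rate omega_z.
```

Because the population oscillates at the rotation rate, even a handful of wait times pins down this one parameter; the guide below applies the same idea to all three components of the rotation.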

2. Build a graph-based optimization encoding the problem

Represent the cost function to be minimized using a graph. Define optimization parameters in the graph using graph.optimization_variable, and express the cost as the negative log-likelihood, as explained in the Characterizing your hardware using system identification in Boulder Opal topic. Execute the optimization using qctrl.functions.calculate_optimization by assigning the optimization variables to output nodes of the graph.
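Running the graph engine itself requires a Boulder Opal session, so the structure of such an optimization can be sketched offline with SciPy instead, using the same sum-of-squares cost that appears later in this guide. Every name and value below is illustrative, and `scipy.optimize.minimize` stands in for `qctrl.functions.calculate_optimization`:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated "measured" populations for a single rotation rate.
actual_omega = 2 * np.pi * 1.2e6  # rad/s
wait_times = np.linspace(25e-9, 500e-9, 20)  # s
measured_populations = (1 + np.sin(actual_omega * wait_times)) / 2

def cost(parameters):
    # Sum-of-squares residual between model predictions and measurements.
    predicted = (1 + np.sin(parameters[0] * wait_times)) / 2
    return np.sum((predicted - measured_populations) ** 2)

# Minimize the cost, starting from an initial guess close to the truth.
result = minimize(cost, x0=[2 * np.pi * 1.1e6], method="Nelder-Mead")
estimated_omega = result.x[0]
```

The graph formulation plays the same role: the optimization variables parameterize a model, the cost node measures the mismatch with the data, and the engine searches for the variable values that minimize it.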

3. Estimate the precision of the results (optional)

Add the Hessian of the cost function to the output nodes in order to perform this additional analysis. Take the inverse of the Hessian to obtain the covariance matrix, and use the square root of this matrix to draw the confidence region of the parameter estimates, as explained in the Characterizing your hardware using system identification in Boulder Opal topic.
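A minimal sketch of this Hessian-to-ellipse procedure, using a made-up 2×2 Hessian and omitting the statistical scaling factor (in the actual workflow the Hessian comes from `graph.hessian`):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

# Made-up Hessian of a cost function at its minimum.
hessian = np.array([[4.0, 1.0], [1.0, 3.0]])

# Inverting the Hessian gives the (unscaled) covariance matrix.
covariance = np.linalg.inv(hessian)

# The matrix square root of the covariance maps a unit circle onto the
# boundary of the corresponding confidence ellipse.
sqrt_covariance = fractional_matrix_power(covariance, 0.5)

theta = np.linspace(0, 2 * np.pi, 100)
circle = np.array([np.cos(theta), np.sin(theta)])  # shape (2, 100)
ellipse = sqrt_covariance @ circle  # boundary points of the ellipse
```

Each point $x$ on the resulting boundary satisfies $x^T H x = 1$, so directions in which the cost is flat (small Hessian eigenvalues) produce a wide ellipse, that is, large parameter uncertainty.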

Worked example: Identifying constant terms of a single-qubit Hamiltonian

In this example, we will consider how to estimate a constant Hamiltonian that rotates a single qubit. In this case, preparing different initial states and letting the qubit evolve for different times before measuring it will reveal information about the parameters of the qubit rotation. This setup is illustrated in the following figure. The constant single-qubit Hamiltonian whose parameters we want to estimate takes the following form:

$$H = \frac{1}{2} \left( \Omega_x \sigma_x + \Omega_y \sigma_y + \Omega_z \sigma_z \right).$$
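As a quick check of this form, any such Pauli combination is Hermitian with eigenvalues $\pm\frac{1}{2}\sqrt{\Omega_x^2 + \Omega_y^2 + \Omega_z^2}$. The parameter values below are arbitrary illustrations, not the values estimated later in this guide:

```python
import numpy as np

# Standard Pauli matrices.
sigma_x = np.array([[0, 1], [1, 0]])
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]])

# Example parameter values (arbitrary frequency units, for illustration only).
omega_x, omega_y, omega_z = 0.5e6, 1.5e6, 1.8e6

hamiltonian = (omega_x * sigma_x + omega_y * sigma_y + omega_z * sigma_z) / 2

# Eigenvalues come in a +/- pair set by the magnitude of the rotation vector.
eigenvalues = np.linalg.eigvalsh(hamiltonian)
magnitude = np.sqrt(omega_x**2 + omega_y**2 + omega_z**2)
```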

The task of the optimizer in this case will be to determine these three parameters $\Omega_x$, $\Omega_y$, and $\Omega_z$. This can be done by passing the measurement results to the optimization engine, which then finds the parameters most likely to have generated that series of points.

In this example we will rely on simulated measurement data. We prepare the qubit in the eigenstates of $\sigma_z$, $\sigma_x$, and $\sigma_y$, and then measure the populations in the eigenstates of $\sigma_y$, $\sigma_z$, and $\sigma_x$, respectively. The optimization then seeks to minimize a cost function parameterized by $\Omega_x$, $\Omega_y$, and $\Omega_z$:

$$C = \sum_i [P_i-p_i(\Omega_x, \Omega_y, \Omega_z)]^2 \,.$$

Here $P_i$ are the measured populations and $p_i(\Omega_x, \Omega_y, \Omega_z)$ are the populations predicted by the model. Minimizing this cost returns an estimate of the parameters that best reproduces the original dynamics of the system.

Moreover, as seen in the Characterizing your hardware using system identification in Boulder Opal topic, the Hessian of this cost allows us to find the confidence region for the parameter estimates. This confidence region is the region of parameter space that contains the correct parameter values with 95% probability. We can then plot the results as three two-dimensional sections, each representing a pair of the variables that we are estimating.
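The scaling applied to this confidence region is a quantile of the F distribution, which the code in this guide expresses through the inverse regularized incomplete beta function. As a sanity check (with made-up values of $p$, $n$, and the confidence level), that expression agrees with the quantile obtained directly from SciPy's F distribution:

```python
import numpy as np
from scipy.special import betaincinv
from scipy.stats import f

# Hypothetical sizes: p estimated parameters, n measured data points.
p = 3
n = 60
alpha = 0.95

# F(p, n - p) quantile written with the inverse regularized incomplete beta
# function, as in this guide.
z = betaincinv((n - p) / 2, p / 2, 1 - alpha)
quantile_from_beta = (n - p) / p * (1 / z - 1)

# The same quantile straight from SciPy's F distribution.
quantile_from_f = f.ppf(alpha, p, n - p)
```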

import matplotlib.pyplot as plt
import matplotlib.ticker as tick
import numpy as np
from qctrlvisualizer import get_qctrl_style
from scipy.linalg import fractional_matrix_power
from scipy.special import betaincinv

from qctrl import Qctrl

plt.style.use(get_qctrl_style())

# Start a session with the API.
qctrl = Qctrl()

# Define standard matrices.
sigma_x = np.array([[0, 1], [1, 0]])
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]])

# Define standard state vectors.
ket_xp = np.array([[1], [1]]) / np.sqrt(2)
ket_yp = np.array([[1], [1j]]) / np.sqrt(2)
ket_zp = np.array([[1], [0]])

# Define parameters to be estimated.
actual_Omegas = (2 * np.pi) * np.array([0.5e6, 1.5e6, 1.8e6])  # Hz
Omega_names = [r"$\Omega_x$", r"$\Omega_y$", r"$\Omega_z$"]

def run_experiments(wait_times, initial_states, projector_states):
    """
    Run a batch of simulated experiments for the given set of wait_times,
    initial_states, and projector_states, whose population is measured at
    the end.
    """
    # Graph for system simulation.
    graph = qctrl.create_graph()

    # Calculate the dynamics of the system.
    hamiltonian = graph.constant_pwc(
        constant=(
            actual_Omegas[0] * sigma_x
            + actual_Omegas[1] * sigma_y
            + actual_Omegas[2] * sigma_z
        )
        / 2,
        duration=max(wait_times),
    )
    unitaries = graph.time_evolution_operators_pwc(
        hamiltonian=hamiltonian, sample_times=wait_times
    )

    # Calculate populations.
    populations = graph.reshape(
        graph.abs(
            projector_states.conj().swapaxes(-2, -1)[:, None]
            @ unitaries
            @ initial_states[:, None]
        )
        ** 2,
        (len(initial_states), len(wait_times)),
    )

    # Add Gaussian error to the measurement result.
    measurement_error = graph.random_normal(
        shape=populations.shape, mean=0, standard_deviation=0.01
    )
    graph.add(populations, measurement_error, name="measurement_results")

    result = qctrl.functions.calculate_graph(
        graph=graph, output_node_names=["measurement_results"]
    )

    return result.output["measurement_results"]["value"]

# Range of wait times in the experiments.
duration = 500e-9  # s
time_segment_count = 20
time_resolution = duration / time_segment_count
wait_times = np.linspace(time_resolution, duration, time_segment_count)

# List of initial state preparations.
initial_states = np.array([ket_zp, ket_xp, ket_yp])

# List of states whose population will be measured at the end.
projector_states = np.array([ket_yp, ket_zp, ket_xp])

measurement_results = run_experiments(wait_times, initial_states, projector_states)

# Graph for system identification. This graph encapsulates the dynamics of
# the system and uses the experimental data to estimate the parameters.
graph = qctrl.create_graph()

# Parameters to be estimated. Frequencies whose half-periods are shorter than
# the smallest spacing between sample points are out of bounds.
frequency_bound = 1 / time_resolution
Omegas = graph.optimization_variable(
    3, lower_bound=-frequency_bound, upper_bound=frequency_bound, name="Omegas"
)

# Create Hamiltonian.
hamiltonian = graph.constant_pwc(
    constant=(Omegas[0] * sigma_x + Omegas[1] * sigma_y + Omegas[2] * sigma_z) / 2,
    duration=duration,
)

# Calculate the projected populations.
unitaries = graph.time_evolution_operators_pwc(
    hamiltonian=hamiltonian, sample_times=wait_times
)
populations = graph.reshape(
    graph.abs(
        projector_states.conj().swapaxes(-2, -1)[:, None]
        @ unitaries
        @ initial_states[:, None]
    )
    ** 2,
    (len(initial_states), len(wait_times)),
)

# Calculate residual sum of squares as the cost.
cost = graph.sum((populations - measurement_results) ** 2, name="rss")

# Calculate Hessian of the negative log likelihood.
hessian = graph.hessian(cost, [Omegas], name="hessian")

# Estimate the parameters.
result = qctrl.functions.calculate_optimization(
    graph=graph, cost_node_name="rss", output_node_names=["Omegas", "hessian"]
)

# Extract values from the calculation.
estimated_Omegas = result.output["Omegas"]["value"]
output_hessian = result.output["hessian"]["value"]
rss = result.cost  # Residual sum of squares at the optimum.

p = len(Omega_names)
n = measurement_results.shape[0] * measurement_results.shape[1]

# Estimate covariance matrix from the Hessian.
covariance_matrix = np.linalg.inv(output_hessian / (2 * rss / (n - p)))

# Calculate scaling factor for the confidence region. For more details, see the
# topic "Characterizing your hardware using system identification in Boulder Opal"
# and N. R. Draper and I. Guttman, The Statistician 44, 399–403 (1995).
alpha = 0.95
inverse_cdf = (n - p) / p * (1 / betaincinv((n - p) / 2, p / 2, 1 - alpha) - 1)
scaling_factor = np.sqrt(p * inverse_cdf)

# Calculate confidence region for 95% probability.
confidence_region = scaling_factor * fractional_matrix_power(covariance_matrix, 0.5)
figure, axes = plt.subplots(1, 3, figsize=(20, 6))
figure.suptitle("Estimated Hamiltonian parameters", x=0.5, y=1.05)

for index, (index_1, index_2) in enumerate([[0, 1], [0, 2], [1, 2]]):
    # Obtain points representing the correct parameters and their estimates.
    estimated_dot = estimated_Omegas[[index_1, index_2]] / 1e6
    actual_dot = actual_Omegas[[index_1, index_2]] / 1e6

    # Obtain coordinates for a circle.
    theta = np.linspace(0, 2 * np.pi, 100)
    circle_coordinates = np.array([np.cos(theta), np.sin(theta)])

    # Define matrix that transforms circle coordinates into ellipse coordinates.
    coordinate_change = (
        confidence_region[np.ix_([index_1, index_2], [index_1, index_2])] / 1e6
    )
    ellipse = coordinate_change @ circle_coordinates + estimated_dot[:, None]

    # Define labels of the axes.
    axes[index].set_xlabel(f"{Omega_names[index_1]} (MHz)")
    axes[index].set_ylabel(f"{Omega_names[index_2]} (MHz)")

    # Format ticks to keep only two significant digits after the decimal point.
    axes[index].xaxis.set_major_formatter(tick.FuncFormatter(lambda x, _: f"{x:.2f}"))
    axes[index].yaxis.set_major_formatter(tick.FuncFormatter(lambda y, _: f"{y:.2f}"))

    # Plot results.
    axes[index].plot(*estimated_dot, "o", label="Estimated parameters")
    axes[index].plot(*actual_dot, "o", label="Actual parameters")
    axes[index].plot(*ellipse, "--", label="95% confidence region")

# Create legends.
handles, labels = axes[0].get_legend_handles_labels()
figure.legend(
    handles=handles, labels=labels, loc="center", bbox_to_anchor=(0.5, 0.95), ncol=3
)

plt.show() 