How to perform parameter estimation with a large amount of data

Estimate Hamiltonian model parameters using measured data and the graph-based stochastic optimization engine

The stochastic optimization engine from Boulder Opal provides a large, modular collection of configuration options that allows it to be used for a range of tasks in quantum control, including estimating Hamiltonian parameters in a model using measured data.

In this notebook, we show how this engine can be used to determine the parameters that characterize aspects of a hardware system using a large amount of measured data. You can read our Characterizing your hardware using system identification in Boulder Opal topic to learn more about parameter estimation, as well as which optimization engine to use depending on your system size or the amount of measured data you have.

Summary workflow

1. Perform measurements to probe the system

In general, the output of probe measurements will be estimates of various parameters, such as entries in a tomographic reconstruction of a state. Craft probe pulses that give access to the terms of the model you want to capture, so that the measurements constrain the optimization search space.

2. Build a graph-based stochastic optimization encoding the problem

Represent the cost function to be minimized as a graph. Define the parameters to be estimated using graph.optimization_variable (or graph.optimizable_scalar for individual scalars), sample a random batch of your data at each iteration using graph.random.choices, and assign the quantities you want to retrieve to output nodes of the graph. Execute the stochastic optimization using boulderopal.run_stochastic_optimization.
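
For instance, a minimal, self-contained sketch of this workflow on a toy problem (estimating a single scale factor from noisy linear data) could look as follows; the toy dataset and the names input_data, measured_data, and scale are illustrative only, and are not part of the example worked through below.

import numpy as np
import boulderopal as bo

# Toy dataset: noisy measurements of a linear response with an unknown scale factor.
input_data = np.random.uniform(0.0, 1.0, size=1000)
measured_data = 0.5 * input_data + np.random.normal(0.0, 0.01, size=1000)

graph = bo.Graph()

# Parameter to estimate (a single scalar in this toy problem).
scale = graph.optimizable_scalar(lower_bound=0.0, upper_bound=1.0, name="scale")

# Draw a random batch of the dataset, resampled at each optimization iteration.
inputs, measurements = graph.random.choices(
    data=[input_data, measured_data], sample_count=100
)

# Least-squares cost between the model prediction and the measured batch.
cost = graph.sum((scale * inputs - measurements) ** 2)
cost.name = "cost"

result = bo.run_stochastic_optimization(
    graph=graph, cost_node_name="cost", output_node_names=["scale"]
)
print(result["output"]["scale"]["value"])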

Example: Identifying parameters in a multi-qubit Hamiltonian

In this example, we will consider how to estimate some constant parameters in a multi-qubit Hamiltonian. By preparing the system in its ground state, letting it evolve under different time-dependent pulses, and measuring, we can obtain information about the system parameters.

We consider an $N=5$ qubit system whose Hamiltonian contains coupling, detuning, and interaction terms: \begin{equation} H = \alpha \, \Omega(t) \sum_{k=1}^N \sigma_x^{(k)} + \delta \sum_{k=1}^N \sigma_z^{(k)} + \gamma \bigotimes_{k=1}^N \sigma_z^{(k)} , \end{equation} where $\sigma_i^{(k)}$ is the $i$-th Pauli matrix acting on the $k$-th qubit, $\alpha \, \Omega(t)$ is a Rabi coupling due to external pulses, $\delta$ is the detuning in each qubit, and $\gamma$ parametrizes the interaction strength between qubits. The Rabi coupling $\alpha \, \Omega(t)$ has two components: unitless input values that the experimentalist can change ($\Omega(t)$), and a scaling factor that relates these to the physical coupling occurring in the system ($\alpha$).
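
To make the tensor-product structure of these operators concrete, here is a small NumPy sketch of how they could be assembled from single-qubit Pauli matrices; the embed helper is purely illustrative, and the worked example below builds the same operators with graph.pauli_kronecker_product instead.

from functools import reduce

import numpy as np

identity = np.eye(2)
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

def embed(operator, qubit, qubit_count):
    # Id ⊗ … ⊗ operator ⊗ … ⊗ Id, with `operator` acting on the given qubit.
    return reduce(np.kron, [operator if k == qubit else identity for k in range(qubit_count)])

qubit_count = 5

# Σ_k σx^(k): Rabi coupling term driving every qubit.
rabi_coupling_operator = sum(embed(sigma_x, k, qubit_count) for k in range(qubit_count))
# Σ_k σz^(k): detuning term acting on every qubit.
detuning_operator = sum(embed(sigma_z, k, qubit_count) for k in range(qubit_count))
# ⊗_k σz^(k): N-body interaction term.
interaction_operator = reduce(np.kron, [sigma_z] * qubit_count)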

The task of the optimizer in this case is to determine the parameters $\alpha$, $\delta$, and $\gamma$. It does so by relying on experimental data obtained by applying $\Omega(t)$ pulses and measuring the resulting ground-state population, which depends on the Hamiltonian parameters to be estimated. The sets of measured points are then provided to the optimization engine (together with the pulses used to generate them), which finds the parameters most likely to have produced those measurements.

In this example we will rely on a large set of simulated measurement data ($\Omega(t)$ signals and corresponding final ground-state populations), which we generate first. With the system initially in the ground state, $|0\ldots0\rangle$, we send piecewise-constant controls for $\Omega(t)$ with a fixed duration of 10 µs. After the evolution, we obtain an approximate ground-state population by averaging over shot_count projective measurements of the final state.

import numpy as np
import boulderopal as bo
# Parameters to estimate
actual_alpha = 2 * np.pi * 0.83e6  # Hz
actual_delta = 2 * np.pi * 0.31e6  # Hz
actual_gamma = 2 * np.pi * 0.11e6  # Hz

# System parameters
qubit_count = 5

# Measurement parameters
shot_count = 100  # Number of projective measurement shots to take
measurement_error = 1.0 / shot_count

# Dataset parameters
dataset_size = 5000  # Size of the entire dataset
batch_size = 100  # Size of the batch to use at each stochastic optimization iteration

# Pulses parameters
duration = 10.0e-6  # s
segment_count = 10  # number of piecewise-constant segments
# Generate values for all Rabi coupling pulses in the dataset, shape [D=5000, T=10]
omega_dataset = np.random.uniform(low=0.0, high=1.0, size=[dataset_size, segment_count])

# Define graph to simulate ground-state populations
graph = bo.Graph()

# Define initial state |00000>
initial_state = graph.fock_state(2**qubit_count, 0)

# Create signal, batch of D pwc signals
omega_signal = graph.pwc_signal(values=omega_dataset, duration=duration)

# Create Hamiltonian operators

# Add (Id₂ ⊗ … ⊗ Id₂) ⊗ σx ⊗ (Id₂ ⊗ … ⊗ Id₂) terms
rabi_coupling_operator = sum(
    graph.pauli_kronecker_product([("X", k)], qubit_count) for k in range(qubit_count)
)

# Add (Id₂ ⊗ … ⊗ Id₂) ⊗ σz ⊗ (Id₂ ⊗ … ⊗ Id₂) terms
detuning_operator = sum(
    graph.pauli_kronecker_product([("Z", k)], qubit_count) for k in range(qubit_count)
)

# Product of all σz operators
interaction_operator = graph.pauli_kronecker_product(
    [("Z", i) for i in range(qubit_count)], qubit_count
)

# Create Hamiltonian, a batch of D pwc operators of shape [2^N, 2^N]
hamiltonian = (
    omega_signal * actual_alpha * rabi_coupling_operator
    + actual_delta * detuning_operator
    + actual_gamma * interaction_operator
)

# Calculate the final unitary evolution operators, shape=[D, 2^N, 2^N]
unitary = graph.time_evolution_operators_pwc(
    hamiltonian=hamiltonian, sample_times=np.array([duration])
)[:, -1]

# Evolve the initial state, shape=[D, 2^N, 1]
final_state = unitary @ initial_state[:, None]

# Calculate the final populations of all basis states, shape=[D, 2^N]
populations = graph.abs(final_state[:, :, 0]) ** 2
populations.name = "populations"

# Execute the graph and extract the populations
result = bo.execute_graph(graph=graph, output_node_names="populations")
calculated_populations = result["output"]["populations"]["value"]

# Take shot_count projective measurements of the system.
measured_populations = []
for pops in calculated_populations:
    measurements = list(np.random.choice(2**qubit_count, size=shot_count, p=pops))
    measured_populations.append(measurements.count(0) / shot_count)

measurement_dataset = np.array(measured_populations)

Estimating the parameters of the Hamiltonian

The parameter estimation for the Hamiltonian terms is performed by finding the form of the evolution that most closely matches the measured data points. The optimization seeks to minimize a cost function parameterized by $\alpha$, $\delta$, and $\gamma$:

\begin{equation} C = \sum_i \frac{[P_i-p_i(\alpha, \delta, \gamma)]^2}{2(\Delta P_i)^2} , \end{equation}

where $P_i$ are the measured ground-state populations, $p_i(\alpha, \delta, \gamma)$ are the corresponding populations predicted by the model, and $\Delta P_i$ are the measurement errors.

Doing so returns the choice of parameters most likely to have generated the original dynamics of the system, and also allows us to calculate the precision of the estimates: using the Cramér–Rao bound, the Hessian of the cost function can be identified with the inverse of the covariance matrix of the estimated parameters.
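
Concretely, for this Gaussian-noise cost the covariance matrix of the estimated parameters is approximated by the inverse of the Hessian of the cost function evaluated at its minimum,

\begin{equation} \mathrm{Cov}(\alpha, \delta, \gamma) \approx H^{-1} , \qquad H_{ij} = \frac{\partial^2 C}{\partial \theta_i \, \partial \theta_j} , \end{equation}

so that the one-standard-deviation uncertainty of each parameter is $\sqrt{(H^{-1})_{ii}}$. The code at the end of this example doubles these values to quote 2-sigma error bars.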

The graph used for the optimization is very similar to the one we created to simulate the evolution and generate the measurement data, but with some key differences:

  • we define optimization variables for the parameters to be estimated with graph.optimizable_scalar.
  • we select a different subset of the whole dataset at each optimization iteration using graph.random.choices.
  • we define a cost node containing the negative log-likelihood, as well as a node computing its Hessian.

graph = bo.Graph()

# Define initial state |00000>
initial_state = graph.fock_state(2**qubit_count, 0)

# Sample a batch of the omega/measurement dataset
(omega_values_batch, measurement_batch) = graph.random.choices(
    data=[omega_dataset, measurement_dataset], sample_count=batch_size
)

# Parameters to be estimated
alpha = graph.optimizable_scalar(lower_bound=0, upper_bound=10.0e6, name="alpha")
delta = graph.optimizable_scalar(lower_bound=0, upper_bound=5.0e6, name="delta")
gamma = graph.optimizable_scalar(lower_bound=0, upper_bound=2.0e6, name="gamma")

# Create signal, batch of D pwc signals
omega_signal = graph.pwc_signal(values=omega_values_batch, duration=duration)

# Create Hamiltonian operators

# Add (Id₂ ⊗ … ⊗ Id₂) ⊗ σx ⊗ (Id₂ ⊗ … ⊗ Id₂) term
rabi_coupling_operator = sum(
    graph.pauli_kronecker_product([("X", k)], qubit_count) for k in range(qubit_count)
)

# Add (Id₂ ⊗ … ⊗ Id₂) ⊗ σz ⊗ (Id₂ ⊗ … ⊗ Id₂) term
detuning_operator = sum(
    graph.pauli_kronecker_product([("Z", k)], qubit_count) for k in range(qubit_count)
)

# Product of all σz operators
interaction_operator = graph.pauli_kronecker_product(
    [("Z", i) for i in range(qubit_count)], qubit_count
)

# Create Hamiltonian, a batch of D pwc operators of shape [2^N, 2^N]
hamiltonian = (
    omega_signal * alpha * rabi_coupling_operator
    + delta * detuning_operator
    + gamma * interaction_operator
)

# Calculate the final unitary evolution operators, shape=[D, 2^N, 2^N]
unitary = graph.time_evolution_operators_pwc(
    hamiltonian=hamiltonian, sample_times=np.array([duration])
)[:, -1]

# Evolve the initial state, shape=[D, 2^N, 1]
state = unitary @ initial_state[:, None]

# Calculate the final populations of all basis states, shape=[D, 2^N]
populations = graph.abs(state[:, :, 0]) ** 2
# Extract the ground-state population for each element of the batch
calculated_measurements = populations[:, 0]

# Create the negative log-likelihood cost node
cost = 0.5 * graph.sum(
    ((calculated_measurements - measurement_batch) / measurement_error) ** 2.0
)
cost.name = "cost"

# Calculate Hessian
hessian = graph.hessian(cost, [alpha, delta, gamma], name="hessian")

We can finally execute the optimization by passing the graph to boulderopal.run_stochastic_optimization, along with the name of the cost node and the names of the nodes whose values we want to retrieve. To reduce the risk of converging to a local minimum, we run five optimizations and keep the one with the lowest cost.

with bo.cloud.group_requests():
    results = [
        bo.run_stochastic_optimization(
            graph=graph,
            cost_node_name="cost",
            output_node_names=["alpha", "delta", "gamma", "hessian"],
        )
        for _ in range(5)
    ]

result = min(results, key=lambda r: r["cost"])

We can now retrieve the estimated values of the Hamiltonian parameters from the optimization results, along with the Hessian of the cost function, which allows us to calculate the uncertainties associated with the estimates.

# Calculate 2-sigma uncertainties (roughly 95% confidence intervals)
hessian = result["output"]["hessian"]["value"]
uncertainties = 2.0 * np.sqrt(np.diag(np.linalg.inv(hessian)))
alpha_uncertainty, delta_uncertainty, gamma_uncertainty = uncertainties

# Print parameter estimates
print(f"alpha:\t   actual =  {actual_alpha / 1e6:.4f} MHz")
print(
    f"\testimated = ({result['output']['alpha']['value'] / 1e6:.4f} "
    f"± {alpha_uncertainty / 1e6:.4f}) MHz"
)
print()
print(f"delta:\t   actual =  {actual_delta / 1e6:.4f} MHz")
print(
    f"\testimated = ({result['output']['delta']['value'] / 1e6:.4f} "
    f"± {delta_uncertainty / 1e6:.4f}) MHz"
)
print()
print(f"gamma:\t   actual =  {actual_gamma / 1e6:.4f} MHz")
print(
    f"\testimated = ({result['output']['gamma']['value'] / 1e6:.4f} "
    f"± {gamma_uncertainty / 1e6:.4f}) MHz"
)
alpha:	   actual =  5.2150 MHz
	estimated = (5.2142 ± 0.0026) MHz

delta:	   actual =  1.9478 MHz
	estimated = (1.9536 ± 0.0021) MHz

gamma:	   actual =  0.6912 MHz
	estimated = (0.6914 ± 0.0045) MHz

The three values we have estimated for $\alpha$, $\delta$, and $\gamma$ are close to the actual parameters present in the Hamiltonian defined above. The quoted errors in the parameter estimates correspond to two standard deviations and thus represent roughly 95% confidence intervals.
