How to optimize controls on large sparse Hamiltonians

Efficiently perform control optimization on sparse Hamiltonians

The Q-CTRL Python package exposes a highly flexible optimization engine for general-purpose gradient-based optimization. It can be directly applied to model-based control optimization for arbitrary-dimensional quantum systems.

This optimization engine provides best-in-class time to solution, but the efficiency of the optimization inevitably declines as the system size grows. In cases where the Hamiltonian to be treated is sparse, it is possible to recover performance via specialized methods. In this notebook we demonstrate the use of sparse-Hamiltonian methods for efficient optimization of large systems.

Summary workflow

1. Use sparse-matrix methods in the computational graph

The flexible Q-CTRL optimization engine expresses all optimization problems as data flow graphs, which describe how optimization variables (variables that can be tuned by the optimizer) are transformed into the cost function (the objective that the optimizer attempts to minimize).

You can carry out the optimization of larger systems by using approximate integration methods that are computationally cheaper than the standard dense approaches. This is especially helpful when the Hamiltonian is sparse.

Here you can define system Hamiltonian terms as sparse matrices (as coo_matrix objects from the scipy package) and integrate them using the Lanczos algorithm, which is available in the function qctrl.operations.state_evolution_pwc. The Lanczos algorithm accepts a parameter called krylov_subspace_dimension, which controls the accuracy of the operation. You can choose this parameter manually or use the operation qctrl.operations.estimated_krylov_subspace_dimension_lanczos to obtain an estimate of its best value for your Hamiltonian and error tolerance.

All of these calculations are performed within the optimization graph.
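For example, the following minimal sketch shows how these pieces fit together inside a single graph. It assumes an active qctrl session; the two-qubit operator, drive bounds, duration, and manually chosen Krylov subspace dimension are placeholder values used only for illustration.

import numpy as np
from scipy.sparse import coo_matrix

sigma_x = np.array([[0, 1], [1, 0]], dtype=np.complex128)

# Placeholder two-qubit XX coupling term, stored as a sparse matrix
xx_operator = coo_matrix(np.kron(sigma_x, sigma_x))
duration = 1e-6  # s

with qctrl.create_graph() as graph:
    # Piecewise-constant drive with 10 optimizable segments
    drive = qctrl.operations.pwc_signal(
        values=qctrl.operations.optimization_variable(10, -1e6, 1e6),
        duration=duration,
    )

    # Sparse Hamiltonian built from sparse piecewise-constant terms
    hamiltonian = qctrl.operations.sparse_pwc_sum(
        [qctrl.operations.sparse_pwc_operator(signal=drive, operator=xx_operator)]
    )

    # Lanczos-based integration of the Schrödinger equation; here the Krylov
    # subspace dimension is set manually instead of being estimated
    states = qctrl.operations.state_evolution_pwc(
        initial_state=np.array([1, 0, 0, 0], dtype=np.complex128),
        hamiltonian=hamiltonian,
        krylov_subspace_dimension=8,
        sample_times=np.linspace(0, duration, 50),
    )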

2. Execute graph-based optimization

With the graph object created, you can run an optimization using the qctrl.functions.calculate_optimization function. You must provide the graph, the name of the cost node, and the names of the output nodes; the function returns the results of the optimization. Note that the example code block below uses node names that should be replaced with the names used in your graph.
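For instance, a call of the following form runs the optimization and returns the best result found. The node names "cost", "infidelities", and "$\Omega_1$" are hypothetical placeholders for the names you defined in your own graph.

result = qctrl.functions.calculate_optimization(
    graph=graph,
    cost_node_name="cost",
    output_node_names=["infidelities", r"$\Omega_1$"],
)
print(f"Optimized cost: {result.cost:.3e}")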

import matplotlib.pyplot as plt
import numpy as np
from qctrlvisualizer import get_qctrl_style, plot_controls

from qctrl import Qctrl

plt.style.use(get_qctrl_style())

# Sparse matrix imports
from scipy.sparse import coo_matrix

# Starting a session with the API
qctrl = Qctrl()

Worked example: Optimized controls in a nearest-neighbor-coupled qubit chain

To illustrate this, consider a system of four qubits where each qubit only interacts with the nearest neighbor through an XX coupling operator:

$$ H = \frac{1}{2} \sum_{i=1}^3 \Omega_i(t) \sigma_{x,i} \sigma_{x,i+1}. $$

Suppose you start from the state $\left| 0000 \right\rangle$ and want to create the state $\left(\left| 0000 \right\rangle +i\left| 0011 \right\rangle -i\left| 1100 \right\rangle+ \left| 1111 \right\rangle \right)/2$ by using optimized controls for $\Omega_1(t)$, $\Omega_2(t)$, and $\Omega_3(t)$.
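The code below labels computational basis states by binary strings, so that, for example, $\left|0011\right\rangle$ corresponds to index int("0011", 2) = 3 of the state vector. As a quick sanity check of this convention (not part of the optimization itself), you can build the target state and verify that it is normalized:

import numpy as np

# Target state (|0000> + i|0011> - i|1100> + |1111>)/2 in the 16-dimensional basis
target_state = np.zeros(2 ** 4, dtype=complex)
target_state[int("0000", 2)] = 1 / 2
target_state[int("0011", 2)] = 1j / 2
target_state[int("1100", 2)] = -1j / 2
target_state[int("1111", 2)] = 1 / 2

# Four equal-weight amplitudes of modulus 1/2 give a normalized state
assert np.isclose(np.linalg.norm(target_state), 1.0)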

The following code uses sparse-matrix techniques and the Lanczos algorithm: it first estimates the Krylov subspace dimension needed for an integration accuracy of 1e-5, and then uses that value to optimize the controls. The code also compares the integration that uses the Lanczos algorithm with the exact integration, and plots the optimized controls.

# Define standard matrices
sigma_x = np.array([[0, 1], [1, 0]], dtype=np.complex128)

# Define physical constraints
omega_max = 2 * np.pi * 5e6  # Hz
duration = 500e-9  # s
segment_count = 20
sample_times = np.linspace(0, duration, 100)
initial_state = np.array([1] + [0] * 15)
target_state = np.zeros(2 ** 4, complex)
target_state[int("0000", 2)] = 1 / 2
target_state[int("0011", 2)] = 1j / 2
target_state[int("1100", 2)] = -1j / 2
target_state[int("1111", 2)] = 1 / 2

# Define coupling operators as sparse matrices
operators = [
    0.5 * coo_matrix(np.kron(np.kron(sigma_x, sigma_x), np.eye(4))),
    0.5 * coo_matrix(np.kron(np.kron(np.eye(2), np.kron(sigma_x, sigma_x)), np.eye(2))),
    0.5 * coo_matrix(np.kron(np.eye(4), np.kron(sigma_x, sigma_x))),
]

# Build Krylov subspace dimension estimation graph
with qctrl.create_graph() as graph:

    # Find the spectral range for the Hamiltonian with highest omega value
    spectral_range = qctrl.operations.spectral_range(
        operator=omega_max * sum(operators).tocoo()
    )

    # Find the suggested Krylov subspace dimension for that spectral range
    # and error tolerance of 1e-5
    qctrl.operations.estimated_krylov_subspace_dimension_lanczos(
        spectral_range=spectral_range,
        duration=duration,
        maximum_segment_duration=duration / segment_count,
        error_tolerance=1e-5,
        name="krylov_subspace_dimension",
    )

# Run the calculation for the recommended Krylov subspace dimension
krylov_subspace_dimension = qctrl.functions.calculate_graph(
    graph=graph, output_node_names=["krylov_subspace_dimension"]
).output["krylov_subspace_dimension"]["value"]

# Build optimization graph
with qctrl.create_graph() as graph:

    # Define individual control signals (Omegas)
    signals = [
        qctrl.operations.pwc_signal(
            values=qctrl.operations.optimization_variable(
                segment_count, -omega_max, omega_max
            ),
            duration=duration,
            name=fr"$\Omega_{n}$",
        )
        for n in [1, 2, 3]
    ]

    # Define individual terms of the Hamiltonian
    hamiltonian_terms = [
        qctrl.operations.sparse_pwc_operator(signal=signal, operator=operator)
        for signal, operator in zip(signals, operators)
    ]

    # Create total Hamiltonian by adding terms
    hamiltonian = qctrl.operations.sparse_pwc_sum(hamiltonian_terms)

    # Integrate the Schrödinger equation for the Hamiltonian
    evolved_states = qctrl.operations.state_evolution_pwc(
        initial_state=initial_state,
        hamiltonian=hamiltonian,
        krylov_subspace_dimension=krylov_subspace_dimension,
        sample_times=sample_times,
    )

    # Calculate the final state infidelity and use it as the cost
    final_state = evolved_states[-1]
    final_state_fidelity = (
        qctrl.operations.abs(qctrl.operations.sum(final_state * target_state.conj()))
        ** 2
    )
    final_state_infidelity = qctrl.operations.abs(1 - final_state_fidelity, name="cost")

    # Calculate the infidelities as a function of time
    state_fidelities = (
        qctrl.operations.abs(evolved_states @ target_state.conj()[:, None]) ** 2
    )
    state_infidelities = qctrl.operations.abs(1 - state_fidelities, name="infidelities")

    # Calculate the evolution using a dense Hamiltonian, for comparison
    dense_hamiltonian_terms = [
        signal * operator.toarray() for signal, operator in zip(signals, operators)
    ]
    dense_hamiltonian = qctrl.operations.pwc_sum(dense_hamiltonian_terms)
    time_evolution_operators = qctrl.operations.time_evolution_operators_pwc(
        hamiltonian=dense_hamiltonian, sample_times=sample_times
    )
    dense_evolved_states = time_evolution_operators @ initial_state[:, None]
    dense_fidelities = (
        qctrl.operations.abs(target_state.conj()[None, :] @ dense_evolved_states) ** 2
    )
    dense_infidelities = qctrl.operations.abs(
        1 - dense_fidelities, name="dense_infidelities"
    )


# Optimize with recommended choice of Krylov subspace dimension
result = qctrl.functions.calculate_optimization(
    graph=graph,
    cost_node_name="cost",
    output_node_names=["infidelities", "dense_infidelities"]
    + [fr"$\Omega_{n}$" for n in [1, 2, 3]],
    optimization_count=5,
)

# Calculate infidelities as a function of time
infidelities = result.output.pop("infidelities")["value"].flatten()
dense_infidelities = np.array(
    result.output.pop("dense_infidelities")["value"]
).flatten()

# Plot and print results
plt.plot(sample_times / 1e-9, infidelities, "--", label="Lanczos integration")
plt.plot(sample_times / 1e-9, dense_infidelities, label="Exact integration")
plt.legend()
plt.xlabel("Time (ns)")
plt.ylabel("State infidelity")

print(f"\nRecommended Krylov subspace dimension:\t{krylov_subspace_dimension}")
print(f"Final state infidelity:\t\t\t{result.cost:.3e} (Lanczos integration)")
print(f"Final state infidelity:\t\t\t{dense_infidelities[-1]:.3e} (Exact integration)")

plot_controls(plt.figure(), result.output)

Recommended Krylov subspace dimension:	10
Final state infidelity:			3.775e-15 (Lanczos integration)
Final state infidelity:			1.155e-13 (Exact integration)

Comparison between the time evolution of the infidelity using the Lanczos and exact integrations (top). The difference between them is well within the 1e-5 error tolerance. Optimized pulses obtained to generate the target state (bottom).