# How to simulate large open system dynamics

Calculate the dynamics of a high-dimensional quantum system described by a GKS–Lindblad master equation

The Q-CTRL Python package enables you to simulate the evolution of open quantum systems that interact with their environment.

For larger systems, with Hilbert space dimension greater than roughly 10, computing the exact evolution can be prohibitively expensive. Instead, you can obtain better performance by using an approximate method to calculate the evolution. You can express the dynamics of these open quantum systems in terms of a master equation, such as the one described by Gorini, Kossakowski, and Sudarshan (GKS) and Lindblad. The tools described in this notebook allow you to solve this GKS–Lindblad master equation with approximate methods, obtaining the time evolution of large systems.

## Summary workflow

### 1. Calculate the evolution of a density matrix

The Q-CTRL Python package provides the density_matrix_evolution_pwc graph operation to obtain the solution of the GKS–Lindblad master equation at some sample times for a given piecewise-constant Hamiltonian and set of Lindblad operators. This function makes it easy to switch between the exact and approximate methods, simply by passing the error_tolerance parameter to it. This parameter controls the precision of the method: a smaller value leads to a more accurate solution, but can require longer computation times. However, setting it too small (for example, below 1e-12) will not improve the precision further, since the dominant error in that case is due to floating-point arithmetic. A recommended value is around 1e-7. Note that increasing the number or density of requested sample times will not affect the precision of the solution.
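To illustrate what such a solver computes, the following minimal NumPy/SciPy sketch (not Boulder Opal code; all names here are ours) propagates a small GKS–Lindblad equation exactly by exponentiating the vectorized Liouvillian. For large systems this dense approach scales poorly, which is precisely why the approximate solver is useful.

```python
import numpy as np
from scipy.linalg import expm

# Single qubit with decay: H = (omega / 2) sigma_z, Lindblad operator sigma_- at rate gamma.
omega, gamma, t = 1.0, 0.1, 5.0
sigma_z = np.diag([1.0, -1.0]).astype(complex)
sigma_m = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
identity = np.eye(2, dtype=complex)

hamiltonian = 0.5 * omega * sigma_z
lindblad_op = np.sqrt(gamma) * sigma_m
ldl = lindblad_op.conj().T @ lindblad_op

# Row-major vectorization: vec(A @ rho @ B) = kron(A, B.T) @ vec(rho).
liouvillian = (
    -1j * (np.kron(hamiltonian, identity) - np.kron(identity, hamiltonian.T))
    + np.kron(lindblad_op, lindblad_op.conj())
    - 0.5 * (np.kron(ldl, identity) + np.kron(identity, ldl.T))
)

rho_0 = np.diag([0.0, 1.0]).astype(complex)  # qubit initially in the excited state
rho_t = (expm(liouvillian * t) @ rho_0.reshape(-1)).reshape(2, 2)

# The dynamics preserve the trace, and the excited population decays as exp(-gamma * t).
print(np.trace(rho_t).real, rho_t[1, 1].real)
```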

### 2. Use sparse matrices (recommended)

When you use the error_tolerance parameter, you have the option of setting up your system with sparse matrices. Using sparse matrices can lead to performance improvements if your Hamiltonian and collapse operators contain mostly zeros.

## Worked example: Superconducting resonator coupled to a transmon qubit

In this example, we consider a superconducting qubit coupled to a transmission line resonator (truncated to $10$ levels), with a drive on the resonator. We show how to simulate a measurement process that uses the quadratures of the resonator output to probe the qubit state.

Assuming that the qubit is far detuned from the resonator (the dispersive approximation), and that the resonator drive is off resonance from the qubit transition frequency, in the absence of losses the system can be described by the Hamiltonian (see Gambetta et al. and Blais et al.):

$$H(t) = \frac{\omega_q+g^2/\Delta}{2} \sigma_z + (\omega_r-\omega_d) a^\dagger a + \frac{g^2}{\Delta} a^\dagger a \sigma_z + \Gamma(t) a^\dagger + \Gamma^*(t) a,$$

where $a$ is the annihilation operator for the resonator, $\sigma_z$ is the Pauli Z matrix for the qubit, $\Gamma(t)$ is the resonator drive, $\omega_r$ is the resonator frequency, $\omega_q$ is the qubit transition frequency, $\omega_d$ is the resonator drive frequency, $\Delta=\omega_q-\omega_r$ is the qubit detuning from the resonator, and $g$ is the coupling strength.

The losses in the system can be described by two decoherent terms: the resonator photon loss $\kappa$ (with associated Lindblad operator $a$) and the qubit decay $\gamma$ (with associated Lindblad operator $\sigma_-$), leading to the GKS–Lindblad equation describing the system evolution:

$$\frac{\mathrm{d} \rho(t)}{\mathrm{d} t} = - i \left[ H(t), \rho(t) \right] + \kappa \left[ a \rho(t) a^\dagger - \frac{1}{2} \rho(t) a^\dagger a - \frac{1}{2} a^\dagger a \rho(t) \right] + \gamma \left[ \sigma_- \rho(t) \sigma_+ - \frac{1}{2} \rho(t) \sigma_+ \sigma_- - \frac{1}{2} \sigma_+\sigma_- \rho(t) \right],$$

where $\sigma_{\pm} = (\sigma_x \mp i\sigma_y)/2$.
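As a quick consistency check on these conventions, the short NumPy snippet below (an illustrative sketch, not part of the example that follows) builds $\sigma_\pm$ from the formula above and verifies that the dissipative term leaves the trace of any density matrix unchanged, as the GKS–Lindblad form guarantees.

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_m = (sigma_x + 1j * sigma_y) / 2  # sigma_-, per the formula above
sigma_p = (sigma_x - 1j * sigma_y) / 2  # sigma_+


def dissipator(lindblad_op, rho):
    """D[L] rho = L rho L^dag - (1/2) {L^dag L, rho}."""
    ldl = lindblad_op.conj().T @ lindblad_op
    return lindblad_op @ rho @ lindblad_op.conj().T - 0.5 * (ldl @ rho + rho @ ldl)


# Build a random density matrix (positive semidefinite, unit trace).
rng = np.random.default_rng(0)
m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = m @ m.conj().T
rho /= np.trace(rho)

# The dissipative term is traceless for any rho.
print(abs(np.trace(dissipator(sigma_m, rho))))
```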

Below we show how you can simulate this system using Boulder Opal. Note that similar code can be used to perform optimizations, for example if you want to design custom measurement pulses optimized for your particular system.

import matplotlib.pyplot as plt
import numpy as np
from scipy.sparse import coo_matrix
from qctrlvisualizer import get_qctrl_style

from qctrl import Qctrl

plt.style.use(get_qctrl_style())

# Start a session with the API
qctrl = Qctrl()

# Define standard matrices.
identity_q = np.identity(2, dtype=complex)
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
sigma_m = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)

n_r = 10  # 10 levels in the resonator
identity_r = np.identity(n_r, dtype=complex)
a = np.diag(np.sqrt(np.arange(1, n_r)), k=1)

# Define system parameters.
omega_r = 2 * np.pi * 10 * 1e9  # Hz
omega_q = 2 * np.pi * 8 * 1e9  # Hz
omega_d = 2 * np.pi * 10 * 1e9  # Hz
delta = omega_q - omega_r
g = 2 * np.pi * 0.05 * 1e9  # Hz
kappa = 2 * np.pi * 1e6  # Hz
gamma = 2 * np.pi * 1e3  # Hz
duration = 100 * 1e-9  # s

# Define drive parameters.
drive_amplitude = 2 * np.pi * 0.02 * 1e9  # Hz
drive_rise_duration = 20 * 1e-9  # s
segment_count = 1000

# Set up Hamiltonian. We use the coo_matrix function from SciPy to
# convert the matrix to a sparse matrix, which can lead to faster
# computations for large systems.
base_hamiltonian_value = coo_matrix(
    (omega_q + g ** 2 / delta) / 2 * np.kron(identity_r, sigma_z)
    + (omega_r - omega_d) * np.kron(a.T.conj() @ a, identity_q)
    + g ** 2 / delta * np.kron(a.T.conj() @ a, sigma_z)
)
drive_hamiltonian_value = coo_matrix(
    drive_amplitude * np.kron(a.T.conj() + a, identity_q)
)

# Set up the Lindblad terms as (rate, operator) pairs.
lindblad_terms = [
    (kappa, coo_matrix(np.kron(a, identity_q))),
    (gamma, coo_matrix(np.kron(identity_r, sigma_m))),
]

# Set up density matrices for the qubit in |0> and |1> states.
psi_0 = np.kron([1] + [0] * (n_r - 1), [1, 0])
rho_0 = np.outer(psi_0, psi_0)
psi_1 = np.kron([1] + [0] * (n_r - 1), [0, 1])
rho_1 = np.outer(psi_1, psi_1)

# Set up observables for the quadratures of the resonator field.
I = np.kron(a.T.conj() + a, identity_q)
Q = np.kron(-1j * (a - a.T.conj()), identity_q)

# Run the simulation.
graph = qctrl.create_graph()

base_hamiltonian = graph.constant_sparse_pwc_operator(
    duration=duration, operator=base_hamiltonian_value
)

# Prepare a tanh envelope for the drive signal.
drive_envelope_times = np.linspace(0, duration, segment_count)
drive_envelope_values = np.tanh(drive_envelope_times / drive_rise_duration)
drive_envelope = graph.pwc_signal(duration=duration, values=drive_envelope_values)

drive_hamiltonian = graph.sparse_pwc_operator(
    signal=drive_envelope, operator=drive_hamiltonian_value
)

density_matrices = graph.density_matrix_evolution_pwc(
    initial_density_matrix=np.array([rho_0, rho_1]),
    hamiltonian=graph.sparse_pwc_sum([base_hamiltonian, drive_hamiltonian]),
    lindblad_terms=lindblad_terms,
    sample_times=np.linspace(0, duration, 500),
    error_tolerance=1e-7,
)

I_values_0 = graph.trace(density_matrices[0] @ I, name="I_0")
Q_values_0 = graph.trace(density_matrices[0] @ Q, name="Q_0")
I_values_1 = graph.trace(density_matrices[1] @ I, name="I_1")
Q_values_1 = graph.trace(density_matrices[1] @ Q, name="Q_1")

result = qctrl.functions.calculate_graph(
    graph=graph, output_node_names=["I_0", "Q_0", "I_1", "Q_1"]
)



fig, axs = plt.subplots(1, 2, figsize=(15, 5))

sample_times = np.linspace(0, duration, 500) * 1e9

axs[0].plot(sample_times, np.real(result.output["I_0"]["value"]), label=r"$|0\rangle$")
axs[0].plot(sample_times, np.real(result.output["I_1"]["value"]), label=r"$|1\rangle$")
axs[0].set_xlabel("Time (ns)")
axs[0].set_ylabel("I (arb. units)")

axs[1].plot(sample_times, np.real(result.output["Q_0"]["value"]), label=r"$|0\rangle$")
axs[1].plot(sample_times, np.real(result.output["Q_1"]["value"]), label=r"$|1\rangle$")