Algorithmic benchmarking results

Evaluating Fire Opal performance improvement across multiple benchmark algorithms

Introduction

Benchmarking is a common industry practice for determining the performance of quantum computing systems. The usefulness of a quantum computer is ultimately determined by its ability to successfully implement meaningful and relevant quantum algorithms in reasonable time and with reasonable resources.

Fire Opal boosts algorithmic success on quantum computers and makes it possible to run algorithms on NISQ devices. This article provides an overview of the benchmarked improvements achieved by running Fire Opal on IBM backends, compared with default execution without Fire Opal.

At Q-CTRL, benchmarking is performed continuously to test the performance of new error suppression methods. This document presents some of the latest benchmarking results achieved across different algorithms, adapted from the QED-C Application-Oriented Performance Benchmarks for Quantum Computing. These results can help set expectations for the improvements you are likely to see with Fire Opal, but you are encouraged to try it yourself, as Fire Opal's error suppression capabilities are constantly improving.

Benchmarks were carried out across a set of algorithms with widely varying characteristics (connectivity, circuit depth, and so on): Bernstein–Vazirani (BV), the quantum Fourier transform (QFT), and the quantum approximate optimization algorithm (QAOA). Certain algorithms are inherently better suited to certain devices due to device connectivity and layout, but Fire Opal generally boosts algorithmic success probability across devices and algorithms.
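To make the first of these concrete, the following is a minimal sketch of a Bernstein–Vazirani circuit in Qiskit. The 5-qubit size and the secret bitstring are illustrative assumptions, not the 50-qubit configuration benchmarked below.

```python
from qiskit import QuantumCircuit

# Illustrative secret string; the benchmark below uses 50 qubits.
secret = "10110"
n = len(secret)

circuit = QuantumCircuit(n + 1, n)
circuit.x(n)                # prepare the ancilla in |1>
circuit.h(range(n + 1))     # Hadamards: inputs in superposition, ancilla in |->
for i, bit in enumerate(reversed(secret)):
    if bit == "1":
        circuit.cx(i, n)    # oracle: phase-kickback CNOT for each 1 in the secret
circuit.h(range(n))         # final Hadamards map the phase pattern back to the secret
circuit.measure(range(n), range(n))
```

Run ideally, the highest-count bitstring equals the secret string, which is exactly the success criterion used in the benchmark tables below.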

Benchmarking results

Published algorithmic benchmarking results demonstrate significant performance improvement across various algorithms, including Bernstein–Vazirani, the quantum Fourier transform, Grover's search, the quantum approximate optimization algorithm, and the variational quantum eigensolver. The rest of this section provides more detail about the types of algorithms you can run, as well as the expected performance and runtimes.

Independent studies have also demonstrated how Q-CTRL's Performance Management enables algorithmic research at record-breaking scale.

Deterministic quantum algorithms

The following table provides a rough guide to the accuracy and runtimes you can expect, based on prior benchmarking runs on ibm_fez; performance on other devices may vary. The usage time assumes 10,000 shots per circuit. The "Number of qubits" indicated is not a hard limitation but represents a rough threshold below which you can expect extremely consistent solution accuracy. Larger problem sizes have been successfully solved, and testing beyond these limits is encouraged.

| Example | Number of qubits | Accuracy | Measure of accuracy | Total time (s) | Runtime usage (s) |
|---|---|---|---|---|---|
| Bernstein–Vazirani | 50Q | 100% | Success rate (percentage of runs where the correct answer is the highest-count bitstring) | 10 | 8 |
| Quantum Fourier Transform | 30Q | 100% | Success rate (percentage of runs where the correct answer is the highest-count bitstring) | 10 | 8 |
| Quantum Phase Estimation | 30Q | 99.9998% | Accuracy of the angle found: 1 − abs(real_angle − angle_found)/π | 10 | 8 |
| Quantum simulation: Ising model (15 steps) | 20Q | 99.775% | $A$ (see following definition) | 60 (per step) | 15 (per step) |
| Quantum simulation 2: molecular dynamics (20 time points) | 34Q | 96.78% | $A_{mean}$ (see following definition) | 10 (per time point) | 6 (per time point) |
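For reference, the success-rate criterion in this table can be checked directly from a measurement-counts dictionary. This is a minimal sketch assuming a Qiskit-style mapping from bitstrings to shot counts; the counts shown are made-up placeholders, not benchmark data.

```python
def run_succeeded(counts: dict[str, int], correct_bitstring: str) -> bool:
    """True if the correct answer is the highest-count bitstring."""
    return max(counts, key=counts.get) == correct_bitstring

# Hypothetical counts from a 10,000-shot run, per the assumption above.
counts = {"10110": 9210, "10111": 430, "00110": 360}
print(run_succeeded(counts, "10110"))  # True
```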

The accuracy of the measurement of an expectation value is quantified by the metric $A$, defined as follows: \begin{equation} A = 1 - \frac{|\epsilon^{ideal} - \epsilon^{meas}|}{\epsilon^{ideal}_{max} - \epsilon^{ideal}_{min}}, \end{equation} where $\epsilon^{ideal}$ is the ideal expectation value, $\epsilon^{meas}$ is the measured expectation value, and $\epsilon^{ideal}_{max}$ and $\epsilon^{ideal}_{min}$ are the ideal maximum and minimum values. $A_{mean}$ refers to the average value of $A$ across multiple measurements.

This metric is used because it is invariant to global shifts and scaling in the range of attainable values. In other words, regardless of whether you shift the range of possible expectation values higher or lower or increase the spread, the value of $A$ should remain consistent.
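As an illustration, $A$ can be computed in a couple of lines; the values below are arbitrary placeholders chosen only to show the shift invariance described above.

```python
def accuracy_metric(ideal: float, measured: float, ideal_max: float, ideal_min: float) -> float:
    """A = 1 - |ideal - measured| / (ideal_max - ideal_min)."""
    return 1.0 - abs(ideal - measured) / (ideal_max - ideal_min)

print(accuracy_metric(0.90, 0.88, 1.0, -1.0))  # 0.99
# Shifting every value by the same constant leaves A unchanged.
print(accuracy_metric(5.90, 5.88, 6.0, 4.0))   # 0.99
```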

Optimization problems

The following benchmarking results are achieved through Fire Opal's QAOA Solver, which fully optimizes and automates the entire algorithm, from error suppression at the hardware level to efficient problem mapping and closed-loop classical optimization. Behind the scenes, the Solver's pipeline reduces errors at every stage, enabling the enhanced performance required to meaningfully scale. The underlying workflow is inspired by the Quantum Approximate Optimization Algorithm (QAOA), which is a hybrid quantum-classical algorithm. For a detailed summary of the full Optimization Solver workflow, refer to the published manuscript.
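For readers unfamiliar with the QAOA structure the Solver builds on, the following is a minimal sketch of a single-layer QAOA circuit for Max-Cut in Qiskit. The graph, angles, and single layer are illustrative assumptions and do not represent the Solver's internal pipeline.

```python
import networkx as nx
from qiskit import QuantumCircuit

# Illustrative 4-node graph; gamma and beta would be tuned by the classical optimizer.
graph = nx.cycle_graph(4)
gamma, beta = 0.8, 0.4

circuit = QuantumCircuit(graph.number_of_nodes())
circuit.h(range(graph.number_of_nodes()))             # uniform superposition over all cuts
for u, v in graph.edges:
    circuit.rzz(2 * gamma, u, v)                      # cost layer: one RZZ per graph edge
circuit.rx(2 * beta, range(graph.number_of_nodes()))  # mixer layer
circuit.measure_all()
```

In the full hybrid loop, measured cut values feed a classical optimizer that updates gamma and beta; Fire Opal's Solver automates that loop together with error suppression and problem mapping.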

The following benchmark metrics provide a rough indication of the accuracy and scaling of problem types based on prior benchmarking runs on ibm_fez. Actual metrics may differ based on various problem features, such as the number of terms in the objective function (density) and their locality, number of variables, and polynomial order.

The "Number of qubits" indicated is not a hard limitation but represents rough thresholds where you can expect extremely consistent solution accuracy. Larger problem sizes have been successfully solved, and testing beyond these limits is encouraged.

Arbitrary qubit connectivity is supported across all problem types.

| Problem type | Number of qubits | Example | Accuracy | Total time (s) | Runtime usage (s) | Number of iterations |
|---|---|---|---|---|---|---|
| Sparsely-connected quadratic problems | 156 | 3-regular Max-Cut | 100% | 1764 | 293 | 16 |
| Higher-order binary optimization | 156 | Ising spin-glass model | 100% | 1461 | 272 | 16 |
| Densely-connected quadratic problems | 50 | Fully-connected Max-Cut | 100% | 1758 | 268 | 12 |
| Constrained problem with penalty terms | 50 | Weighted Minimum Vertex Cover with 8% edge density | 100% | 1074 | 215 | 10 |

Try running one of the benchmark algorithms by solving Max-Cut using Fire Opal.
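If you want to set up the problem offline first, here is a minimal sketch that builds a 3-regular Max-Cut instance with networkx and scores a candidate partition; the graph size and seed are arbitrary assumptions, and submitting the problem through Fire Opal follows the steps in the Fire Opal documentation.

```python
import networkx as nx

# Arbitrary small 3-regular instance; the benchmark above uses 156 nodes.
graph = nx.random_regular_graph(d=3, n=8, seed=7)

def cut_value(bitstring: str, graph: nx.Graph) -> int:
    """Number of edges crossing the partition encoded by the bitstring."""
    return sum(1 for u, v in graph.edges if bitstring[u] != bitstring[v])

print(cut_value("01010101", graph))
```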

Conclusion

This document provides an indication of the performance improvement you can expect across various algorithms using Fire Opal. The reported metrics represent current mean values across multiple benchmarking experiments, and they will continue to improve as Q-CTRL's methods evolve.

Validating and reproducing these metrics is welcomed and encouraged. Learn how to get started with Fire Opal, and run the Bernstein–Vazirani algorithm.
