Algorithmic benchmarking results
Evaluating Fire Opal performance improvement across multiple benchmark algorithms
Introduction
Benchmarks are a common industry practice to determine the performance of quantum computing systems. The usefulness of a quantum computer is ultimately determined by its ability to successfully implement meaningful and relevant quantum algorithms in reasonable times and with reasonable resources.
Fire Opal boosts algorithmic success on quantum computers and makes it possible to run meaningful algorithms on NISQ devices. This article provides an overview of the benchmarked improvements achieved by running Fire Opal on IBM backends, compared to bare execution without Fire Opal.
At Q-CTRL, benchmarking is performed continuously to test the performance of new error suppression methods. This document presents some of the latest benchmarking results achieved across different algorithms, adapted from the QED-C Application-Oriented Performance Benchmarks for Quantum Computing. These results can help set expectations for the sorts of improvements you will see with Fire Opal, but you are still highly encouraged to try it yourself, as Fire Opal's error suppression capabilities are constantly improving.
Benchmarks were carried out across a set of algorithms with widely varying characteristics (connectivity, circuit depth, etc.): Bernstein–Vazirani (BV), quantum Fourier transform (QFT), and quantum approximate optimization algorithm (QAOA). Certain algorithms are inherently better suited to certain devices, due to device connectivity and layout, but Fire Opal generally boosts algorithmic success probability across devices and algorithms.
Benchmarking results
Published algorithmic benchmarking results demonstrate significant performance improvement across various algorithms, including Bernstein–Vazirani, quantum Fourier transform, Grover's search, quantum approximate optimization algorithm, and variational quantum eigensolver. The rest of this section provides more detail about the types of algorithms you can run, as well as the expected performance and runtimes.
The following independent studies demonstrate how Q-CTRL's Performance Management enables algorithmic research at record-breaking scale:
- Parametrized Energy-Efficient Quantum Kernels for Network Service Fault Diagnosis - up to 50-qubit quantum kernel learning
- Tensor-based quantum phase difference estimation for large-scale demonstration - up to 33-qubit quantum phase estimation
- Hierarchical Learning for Quantum ML: Novel Training Technique for Large-Scale Variational Quantum Circuits - up to 21-qubit quantum data loading
Deterministic quantum algorithms
The following table provides a rough guide to accuracy and runtimes from prior benchmarking runs on ibm_fez. Performance on other devices may vary. The usage time is based on an assumption of 10,000 shots per circuit. The "Number of qubits" indicated is not a hard limitation but represents rough thresholds where you can expect extremely consistent solution accuracy. Larger problem sizes have been successfully solved, and testing beyond these limits is encouraged.
| Example | Number of qubits | Accuracy | Measure of accuracy | Total time (s) | Runtime usage (s) |
| --- | --- | --- | --- | --- | --- |
| Bernstein–Vazirani | 50Q | 100% | Success rate (percentage of runs where the correct answer is the highest-count bitstring) | 10 | 8 |
| Quantum Fourier transform | 30Q | 100% | Success rate (percentage of runs where the correct answer is the highest-count bitstring) | 10 | 8 |
| Quantum phase estimation | 30Q | 99.9998% | Accuracy of the angle found: $1 - \vert\text{real\_angle} - \text{angle\_found}\vert/\pi$ | 10 | 8 |
| Quantum simulation: Ising model (15 steps) | 20Q | 99.775% | $A$ (see following definition) | 60 (per step) | 15 (per step) |
| Quantum simulation 2: molecular dynamics (20 time points) | 34Q | 96.78% | $A_{mean}$ (see following definition) | 10 (per time point) | 6 (per time point) |
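The success-rate metric in the table counts a run as successful when the correct bitstring is the most frequent measurement outcome. The following is a minimal sketch of that computation; the counts used are hypothetical examples, not real benchmarking data.

```python
# Success-rate metric: the fraction of runs where the correct bitstring
# is the highest-count measurement outcome.

def success_rate(runs: list[dict[str, int]], correct_bitstring: str) -> float:
    """Fraction of runs whose most frequent bitstring equals the correct answer."""
    hits = sum(
        1 for counts in runs if max(counts, key=counts.get) == correct_bitstring
    )
    return hits / len(runs)

# Example: three hypothetical runs of a 3-qubit Bernstein-Vazirani circuit
# with hidden string "101".
runs = [
    {"101": 9420, "001": 310, "111": 270},
    {"101": 9655, "100": 345},
    {"101": 8990, "111": 1010},
]
print(success_rate(runs, "101"))  # 1.0, since "101" dominates every run
```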
The accuracy metric $A$ for the measurement of an expectation value is defined as follows: \begin{equation} A = 1 - \frac{|\epsilon^{ideal} - \epsilon^{meas}|}{\epsilon^{ideal}_{max} - \epsilon^{ideal}_{min}}, \end{equation} where $\epsilon^{ideal}$ is the ideal expectation value, $\epsilon^{meas}$ the measured expectation value, $\epsilon^{ideal}_{max}$ the ideal maximum value, and $\epsilon^{ideal}_{min}$ the ideal minimum value. $A_{mean}$ refers to the average value of $A$ across multiple measurements.
This metric is used because it is invariant to global shifts and scaling in the range of attainable values. In other words, regardless of whether you shift the range of possible expectation values higher or lower or increase the spread, the value of $A$ should remain consistent.
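As a worked illustration, the following sketch (with made-up numbers, not benchmark data) computes $A$ and verifies the shift-and-scale invariance just described.

```python
# Sketch of the accuracy metric A for an expectation-value measurement.
# All numbers below are illustrative, not taken from the benchmark runs.

def accuracy_a(ideal: float, measured: float, ideal_max: float, ideal_min: float) -> float:
    """A = 1 - |ideal - measured| / (ideal_max - ideal_min)."""
    return 1 - abs(ideal - measured) / (ideal_max - ideal_min)

# Hypothetical measurement: ideal value 0.50, measured 0.48,
# attainable ideal range [-1, 1].
print(accuracy_a(0.50, 0.48, 1.0, -1.0))  # 0.99

# Shifting and scaling every value identically leaves A unchanged,
# which is the invariance property described above.
shift, scale = 3.0, 2.0
print(accuracy_a(scale * 0.50 + shift, scale * 0.48 + shift,
                 scale * 1.0 + shift, scale * -1.0 + shift))  # 0.99
```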
Optimization problems
The following benchmarking results are achieved with Fire Opal's QAOA Solver, which fully optimizes and automates the entire algorithm, from error suppression at the hardware level to efficient problem mapping and closed-loop classical optimization. Behind the scenes, the Solver's pipeline reduces errors at every stage, enabling the enhanced performance required to scale meaningfully. The underlying workflow is inspired by the quantum approximate optimization algorithm (QAOA), a hybrid quantum-classical algorithm. For a detailed summary of the full Optimization Solver workflow, refer to the published manuscript.
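Concretely, the hybrid loop described above alternates quantum evaluation of a parametrized cost with classical parameter updates. The following is a minimal, self-contained illustration of that structure: a depth-1 QAOA for Max-Cut on a triangle, simulated with NumPy and optimized with SciPy. It is not Fire Opal's pipeline, which replaces each stage with error-suppressed hardware execution and a more sophisticated optimizer.

```python
# Minimal closed-loop QAOA illustration (not Fire Opal's internal pipeline):
# depth-1 QAOA for Max-Cut on a triangle, simulated as a statevector.
import numpy as np
from scipy.optimize import minimize

edges = [(0, 1), (1, 2), (0, 2)]  # triangle graph (optimal cut value: 2)
n = 3

# Diagonal of the Max-Cut cost function: cut value of each basis bitstring.
costs = np.array(
    [sum((s >> i & 1) != (s >> j & 1) for i, j in edges) for s in range(2**n)],
    dtype=float,
)

def expected_cut(params):
    """Expected cut value of the depth-1 QAOA state for (gamma, beta)."""
    gamma, beta = params
    state = np.full(2**n, 2 ** (-n / 2), dtype=complex)  # uniform |+...+> state
    state *= np.exp(-1j * gamma * costs)  # cost (phase-separation) layer
    for q in range(n):  # mixer layer: exp(-i * beta * X) on every qubit
        state = state.reshape(2 ** (n - q - 1), 2, 2**q)
        a, b = state[:, 0, :].copy(), state[:, 1, :].copy()
        state[:, 0, :] = np.cos(beta) * a - 1j * np.sin(beta) * b
        state[:, 1, :] = np.cos(beta) * b - 1j * np.sin(beta) * a
        state = state.reshape(2**n)
    return float(np.abs(state) ** 2 @ costs)

# Classical outer loop: tune (gamma, beta) to maximize the expected cut.
result = minimize(lambda p: -expected_cut(p), x0=[0.4, 0.4], method="COBYLA")
print("Optimized expected cut value:", -result.fun)
```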
The following benchmark metrics provide a rough indication of the accuracy and scaling of problem types based on prior benchmarking runs on ibm_fez. Actual metrics may differ based on various problem features, such as the number of terms in the objective function (density) and their locality, the number of variables, and the polynomial order.
The "Number of qubits" indicated is not a hard limitation but represents rough thresholds where you can expect extremely consistent solution accuracy. Larger problem sizes have been successfully solved, and testing beyond these limits is encouraged.
Arbitrary qubit connectivity is supported across all problem types.
| Problem type | Number of qubits | Example | Accuracy | Total time (s) | Runtime usage (s) | Number of iterations |
| --- | --- | --- | --- | --- | --- | --- |
| Sparsely-connected quadratic problems | 156 | 3-regular Max-Cut | 100% | 1764 | 293 | 16 |
| Higher-order binary optimization | 156 | Ising spin-glass model | 100% | 1461 | 272 | 16 |
| Densely-connected quadratic problems | 50 | Fully-connected Max-Cut | 100% | 1758 | 268 | 12 |
| Constrained problem with penalty terms | 50 | Weighted Minimum Vertex Cover with 8% edge density | 100% | 1074 | 215 | 10 |
Try running one of the benchmark algorithms by solving Max-Cut using Fire Opal.
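As a sketch of what that looks like in code, the snippet below builds a 3-regular Max-Cut instance like the one in the table and hands it to Fire Opal's QAOA Solver. The `fireopal` function names, parameters, and result fields shown are assumptions based on public documentation and may differ from the current API; consult the Fire Opal reference before running.

```python
# Hedged sketch: solving a 3-regular Max-Cut instance with Fire Opal's QAOA
# Solver. Function names, parameters, and result fields are assumptions based
# on public documentation; verify against the current Fire Opal API reference.
import fireopal
import networkx as nx

# 3-regular Max-Cut instance, matching the first row of the table above.
graph = nx.random_regular_graph(d=3, n=156, seed=0)

# Placeholder IBM Quantum credentials (assumed credential helper).
credentials = fireopal.credentials.make_credentials_for_ibmq(
    token="<IBM_QUANTUM_TOKEN>", hub="<HUB>", group="<GROUP>", project="<PROJECT>"
)

result = fireopal.solve_qaoa(
    problem=graph,
    problem_type="maxcut",
    credentials=credentials,
    backend_name="ibm_fez",
)
print(result["solution_bitstring"])  # hypothetical result field
```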
Conclusion
This document provides an indication of the performance improvement you can expect across various algorithms using Fire Opal. The reported metrics represent current mean values across multiple benchmarking experiments; they will continue to improve over the coming weeks and months as Q-CTRL's methods evolve.
Validating and reproducing these metrics is welcomed and encouraged. Learn how to get started with Fire Opal, and run the Bernstein–Vazirani algorithm.
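To reproduce the Bernstein–Vazirani benchmark, a circuit like the following standard Qiskit construction (shown here for a hypothetical 3-bit hidden string) can be submitted through Fire Opal. The `fireopal.execute` call, its parameters, and the credential helper are assumptions based on public documentation; check the current API reference for the exact interface.

```python
# Standard Bernstein-Vazirani circuit in Qiskit for a hypothetical hidden
# string. The fireopal calls below are assumptions based on public
# documentation; verify against the current Fire Opal API reference.
import fireopal
from qiskit import QuantumCircuit, qasm3

hidden_string = "101"  # hypothetical hidden bitstring
n = len(hidden_string)

circuit = QuantumCircuit(n + 1, n)
circuit.x(n)             # put the ancilla in |1> ...
circuit.h(range(n + 1))  # ... so it becomes |-> after this Hadamard layer
for i, bit in enumerate(reversed(hidden_string)):  # little-endian qubit order
    if bit == "1":
        circuit.cx(i, n)  # oracle: phase kickback for each 1-bit of the string
circuit.h(range(n))
circuit.measure(range(n), range(n))

# Placeholder IBM Quantum credentials (assumed credential helper).
credentials = fireopal.credentials.make_credentials_for_ibmq(
    token="<IBM_QUANTUM_TOKEN>", hub="<HUB>", group="<GROUP>", project="<PROJECT>"
)

# Assumed submission path: Fire Opal accepts OpenQASM circuit strings.
job = fireopal.execute(
    circuits=[qasm3.dumps(circuit)],
    shot_count=10_000,
    credentials=credentials,
    backend_name="ibm_fez",
)
print(job.result())  # the hidden string should dominate the returned counts
```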