Algorithmic benchmarking results
An overview of the latest benchmarking results achieved using Q-CTRL performance management on IBM Quantum services
Using Q-CTRL's performance-management option, you can run larger and more complex algorithms on IBM Quantum services while maintaining high-quality execution results. At Q-CTRL, benchmarking is performed continuously to test the performance of new error-suppression methods. This document presents some of the latest benchmarking results achieved across different algorithms, which are adapted from the QED-C Application-Oriented Performance Benchmarks for Quantum Computing.
These results push past many of the known thresholds of what can be run successfully on quantum hardware. It's highly encouraged that you try reproducing these results on your own using Q-CTRL's performance management or try outperforming them using alternative methods. These results are continuously improving and will be updated periodically as the size of experiments grows closer to utility-scale.
Success metrics
- Success probability: The probability that a single execution (shot) of the circuit will yield the correct solution. The ideal value is 1.
- Selectivity: A way to measure the signal-to-noise ratio, or how much the right answer stands out relative to the other (wrong) choices. More precisely, selectivity is the ratio of the success probability to the probability of the most likely incorrect bitstring. A value greater than 1 is ideal.
- Confidence level: The likelihood that, after running the circuit N times (shots), you identify the correct solution rather than one of the incorrect alternatives. The ideal value is 100%.
- Hellinger fidelity: A function of the Hellinger distance, which is used to quantify the similarity between two probability distributions. The ideal value is 1.
Multiple success metrics are used because, together, they tell a more complete story. As problem sizes grow, the number of possible outcomes increases exponentially, so the success probability naturally decreases. Selectivity provides an indication that even a low success probability can be meaningful.
A selectivity greater than one implies that the solution bitstring is more probable than any other single outcome, so the signal can be amplified simply by performing further executions (averaging). As you repeat the experiment, the correct solution keeps appearing as the most probable answer, which boosts confidence that the algorithm is working as expected rather than producing this result by chance.
When the success probability is measurable and the selectivity is greater than one, the confidence level can be measured and shown to converge to 100% as the number of shots grows. Together, these three metrics provide a more complete view of how most algorithms are performing. Some algorithms also have specific metrics that can be used to quantify performance.
For algorithms where the ideal distribution is known, Hellinger fidelity can be used to compare the generated and ideal distributions. For QAOA, problem-specific metrics are defined instead.
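As an illustration, the following is a minimal sketch (not Q-CTRL's benchmarking code) of how the first two metrics and the Hellinger fidelity could be computed from a dictionary of measured counts. The counts and correct bitstring are made-up placeholders, and the Hellinger fidelity helper is Qiskit's qiskit.quantum_info.hellinger_fidelity.

```python
# Illustrative metric helpers operating on a Qiskit-style counts dictionary.
from qiskit.quantum_info import hellinger_fidelity

def success_probability(counts: dict, correct: str) -> float:
    """Fraction of shots that returned the correct bitstring."""
    return counts.get(correct, 0) / sum(counts.values())

def selectivity(counts: dict, correct: str) -> float:
    """Ratio of the success probability to the probability of the most
    likely incorrect bitstring."""
    total = sum(counts.values())
    p_correct = counts.get(correct, 0) / total
    p_best_wrong = max((v for k, v in counts.items() if k != correct), default=0) / total
    return p_correct / p_best_wrong if p_best_wrong > 0 else float("inf")

# Made-up counts for a 3-qubit circuit whose correct answer is "101"
counts = {"101": 620, "001": 140, "111": 130, "000": 110}
print(success_probability(counts, "101"))   # 0.62
print(selectivity(counts, "101"))           # ~4.43

# Hellinger fidelity between the measured counts and the ideal distribution
ideal = {"101": 1.0}
print(hellinger_fidelity(ideal, counts))    # 0.62 for this example
```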
Configuration information
The results were collected using 127-qubit IBM devices: IBM Brisbane and IBM Sherbrooke. Benchmark tests are run daily, and the results shown here represent mean values across those runs.
The "Q-CTRL + IBM" label represents results using Q-CTRL's performance management option, whereas "Without Q-CTRL" measurements were taken using the built-in error suppression and mitigation in Qiskit Runtime (optimization_level=3
and resilience_level=1
).
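For reference, here is a rough sketch of how the baseline "Without Q-CTRL" settings might be configured. It assumes the V1 Qiskit Runtime primitives' Options interface; exact classes and option names vary across qiskit-ibm-runtime versions, and the backend choice is illustrative.

```python
# Sketch of the baseline ("Without Q-CTRL") configuration, assuming the
# V1 Qiskit Runtime primitives. Exact interfaces vary by qiskit-ibm-runtime
# version; the backend name is illustrative.
from qiskit_ibm_runtime import QiskitRuntimeService, Session, Sampler, Options

service = QiskitRuntimeService()
backend = service.backend("ibm_brisbane")

# Built-in error suppression (transpilation level 3) and mitigation (level 1)
options = Options(optimization_level=3, resilience_level=1)

with Session(service=service, backend=backend) as session:
    sampler = Sampler(session=session, options=options)
    # job = sampler.run(circuit, shots=4000)
```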
Bernstein–Vazirani algorithm
In the following figure, the success probability of a Bernstein–Vazirani algorithm is shown as a function of the number of qubits, ranging from 10 to 45. Up to and including 45 qubits, the algorithm is still able to deliver good results with selectivity greater than one.
Figure: Success probability versus number of qubits
Figure: Selectivity versus number of qubits
Because the selectivity is greater than one, the confidence level can be increased by running more iterations (shots). In the plot of "Confidence level," it's clear that greater than 99% confidence can be reached across all three circuit sizes—arriving there just takes more shots and averaging as the number of qubits increases. Still, achieving 99% confidence only takes about 1000 shots with Q-CTRL.
Achieving this level of confidence isn't possible above 10 qubits without Q-CTRL. Because the selectivity is less than one, increasing the number of shots simply continues to produce a random spread of results.
Figure: Confidence level
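For context, the following is a minimal sketch of a textbook Bernstein–Vazirani circuit in Qiskit. The secret bitstring and its length are arbitrary illustrative choices, and this is not necessarily the exact benchmark circuit used above.

```python
# Minimal Bernstein–Vazirani circuit for an n-bit secret string.
from qiskit import QuantumCircuit

def bernstein_vazirani(secret: str) -> QuantumCircuit:
    n = len(secret)
    qc = QuantumCircuit(n + 1, n)

    # Prepare the ancilla in |-> and put the input register in superposition
    qc.x(n)
    qc.h(range(n + 1))

    # Oracle: CNOT onto the ancilla from each input qubit where the secret bit is 1
    # (reversed to match Qiskit's little-endian bit ordering)
    for i, bit in enumerate(reversed(secret)):
        if bit == "1":
            qc.cx(i, n)

    # Undo the Hadamards on the input register and measure
    qc.h(range(n))
    qc.measure(range(n), range(n))
    return qc

qc = bernstein_vazirani("1101")  # ideal measured bitstring: 1101
```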
Quantum Fourier transform
The following figures show the success probability and selectivity of running the Quantum Fourier transform (QFT) algorithm up to 20 qubits. Since this algorithm is more complex than the previous one, scaling to higher qubit counts poses a greater challenge.
Without Q-CTRL performance management, the probability of getting the correct answer on even a single shot drops to zero at a 12-qubit QFT. However, with Q-CTRL, you can still get the correct answer up to 20 qubits.
Selectivity remains well above one up to 14 qubits with Q-CTRL, which means the signal is very strong. From 16 to 20 qubits, the selectivity is lower but still greater than one, which indicates that the correct answer remains the most probable answer.
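As a point of reference, here is a simplified sketch of a QFT-based circuit with a known correct answer, built with Qiskit's library QFT: a basis state encoding an integer is transformed and then inverse-transformed, so the ideal measurement returns the same integer. The integer and register size are illustrative, and this is not necessarily the exact QED-C benchmark construction.

```python
# Simplified QFT round trip with a known correct answer; the integer k and
# register size n are illustrative, not the benchmark's actual parameters.
from qiskit import QuantumCircuit
from qiskit.circuit.library import QFT

n, k = 8, 37  # n-qubit register, integer to encode (k < 2**n)
qc = QuantumCircuit(n)

# Encode |k> as a computational basis state (little-endian)
for i in range(n):
    if (k >> i) & 1:
        qc.x(i)

# Apply the QFT and then its inverse; ideally the measurement returns k
qc.append(QFT(n), range(n))
qc.append(QFT(n).inverse(), range(n))
qc.measure_all()
```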
Quantum phase estimation
The following figures show the success probability and selectivity for the quantum phase estimation algorithm. Using the Q-CTRL performance management strategy, you can achieve meaningful results running this algorithm up to 16 qubits, compared to only 8 qubits without Q-CTRL.
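For context, here is a minimal sketch of a phase-estimation circuit using Qiskit's library PhaseEstimation. The eigenphase, the single-qubit unitary, and the register sizes are illustrative assumptions, not necessarily those used in the benchmark.

```python
# Sketch of quantum phase estimation with Qiskit's PhaseEstimation circuit.
# The eigenphase and register sizes are illustrative choices.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit.library import PhaseEstimation

theta = 11 / 16                  # exactly representable with 4 evaluation qubits
unitary = QuantumCircuit(1)
unitary.p(2 * np.pi * theta, 0)  # phase gate: |1> is an eigenstate with phase theta

num_eval = 4
qpe = PhaseEstimation(num_eval, unitary)

qc = QuantumCircuit(num_eval + 1, num_eval)
qc.x(num_eval)                   # prepare the |1> eigenstate on the system qubit
qc.append(qpe, range(num_eval + 1))
qc.measure(range(num_eval), range(num_eval))
# Ideally the evaluation register encodes theta * 2**num_eval = 11
```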
Greenberger–Horne–Zeilinger (GHZ) state
Entanglement is a resource used in quantum computing that links qubits together in a truly “non-classical” way. Generating a large entangled GHZ state is notoriously difficult.
The current record is a 32-qubit GHZ state, realized earlier this year. The combination of Q-CTRL and IBM Quantum has surpassed that record. It's possible to generate GHZ states up to about 60 qubits using Q-CTRL's error suppression.
Here, Hellinger fidelity is a metric that compares the expected and actual outputs of the machine following state generation—higher is better. The horizontal dashed line near 0.5 is a typical threshold used to identify whether a state passes a test as verifiably entangled. At 60 qubits, the value is 0.501.
The following table provides the numerical data for the previous figure.
| Number of qubits | Hellinger fidelity (Q-CTRL + IBM) | Hellinger fidelity (without Q-CTRL) |
|---|---|---|
| 30 | 0.782 | 0.451 |
| 40 | 0.694 | 0.380 |
| 50 | 0.570 | 0.224 |
| 60 | 0.501 | 0.020 |
| 80 | 0.312 | 0 |
| 100 | 0.111 | 0 |
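For reference, the following is a minimal sketch of the textbook GHZ preparation (a Hadamard followed by a CNOT chain) and of comparing measured counts against the ideal distribution with Qiskit's hellinger_fidelity. Hardware runs typically use a shallower, layout-aware entangling pattern, so treat this purely as an illustration.

```python
# Textbook GHZ preparation on n qubits plus a Hellinger-fidelity comparison.
from qiskit import QuantumCircuit
from qiskit.quantum_info import hellinger_fidelity

n = 30
qc = QuantumCircuit(n)
qc.h(0)
for i in range(n - 1):
    qc.cx(i, i + 1)     # linear CNOT chain spreads the superposition
qc.measure_all()

# Ideal GHZ outcome distribution: all zeros or all ones, each with probability 1/2
ideal = {"0" * n: 0.5, "1" * n: 0.5}
# fidelity = hellinger_fidelity(ideal, measured_counts)  # measured_counts from hardware
```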
Quantum approximate optimization algorithm (QAOA)
The algorithmic performance of the quantum approximate optimization algorithm (QAOA) was benchmarked using randomly generated MaxCut problems of 3-regular graphs (where each node is connected to exactly three other nodes) with a unique solution. In the QAOA implementation benchmarked, p=1, where p is an integer parameter which dictates the depth of the ansatz, and thus affects the approximation quality.
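To make the setup concrete, here is a rough sketch of a p = 1 QAOA ansatz for a random 3-regular MaxCut instance, using Qiskit's QAOAAnsatz with a SparsePauliOp cost operator. The graph size, seed, and the sign/constant conventions of the Ising form are illustrative assumptions, not the benchmark's exact construction.

```python
# Sketch of a p = 1 QAOA ansatz for MaxCut on a random 3-regular graph.
# Graph size and seed are illustrative; constants in the Ising form are dropped.
import networkx as nx
from qiskit.circuit.library import QAOAAnsatz
from qiskit.quantum_info import SparsePauliOp

n = 6
graph = nx.random_regular_graph(3, n, seed=1)

# Ising form of the MaxCut objective (up to an additive constant and an
# overall sign convention): one ZZ term per edge of the graph
terms = [("ZZ", [u, v], 0.5) for u, v in graph.edges()]
cost_op = SparsePauliOp.from_sparse_list(terms, num_qubits=n)

ansatz = QAOAAnsatz(cost_operator=cost_op, reps=1)  # reps=1 corresponds to p = 1
print(ansatz.num_parameters)                        # 2 parameters: one gamma, one beta
```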
Since QAOA is an iterative algorithm, the individual metrics generated after each circuit execution (the quantum piece of the quantum-classical hybrid algorithm) are not as useful as a measure of the overall performance of the algorithm. Rather, it's more indicative to look at the quality of the final solution.
The most important indicator is whether the QAOA algorithm also finds the actual "max cut", that is, the optimal solution; if it does, the run has produced a correct answer. Q-CTRL makes it possible to obtain the correct max cut on problems of 100 qubits and beyond.
Here, Q-CTRL performance management is benchmarked against a "brute-force" classical solution, which at this scale produces a random distribution of results. These random results are comparable to what you would achieve using the default hardware performance as well. The improvement from Q-CTRL is visible as a shift of the distribution of results to the right, toward higher cut values, that is, better solutions.
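To illustrate the quantities involved, the following sketch (not Q-CTRL's benchmarking code) generates a random 3-regular MaxCut instance, evaluates cut values of candidate bitstrings, finds the true maximum cut by exhaustive search (feasible only at small sizes), and samples a uniformly random baseline. The graph size and seed are illustrative.

```python
# Illustrative MaxCut instance, cut-value evaluation, exhaustive reference
# solution, and a random-sampling baseline.
from itertools import product
import random
import networkx as nx

def cut_value(bits: str, graph: nx.Graph) -> int:
    """Number of edges whose endpoints fall in different partitions."""
    return sum(1 for u, v in graph.edges() if bits[u] != bits[v])

n = 12                                      # small illustrative size; benchmarks go to 120
graph = nx.random_regular_graph(3, n, seed=7)

# True maximum cut by exhaustive enumeration of all 2**n assignments
best = max(("".join(b) for b in product("01", repeat=n)),
           key=lambda b: cut_value(b, graph))
print("max cut:", cut_value(best, graph))

# Random baseline: distribution of cut values from uniformly random bitstrings
samples = ["".join(random.choice("01") for _ in range(n)) for _ in range(1000)]
print("mean random cut:", sum(cut_value(s, graph) for s in samples) / len(samples))
```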
Note that these results are achieved using techniques that are included in Q-CTRL's hardware-optimized QAOA Solver. Sign up for Fire Opal to access the Solver.
The following images show randomly generated MaxCut problems of 80-qubit, 100-qubit, and 120-qubit sizes. Q-CTRL found the correct answer each time.
Even at the scale of 120 qubits, the solver is still able to determine the correct maximum cut value. The likelihood of finding this answer randomly is incredibly low, and the answer occurs in the distribution multiple times, meaning the solver consistently finds high-quality answers.
Conclusion
This document provides an indication of the performance improvement you can expect to achieve across various algorithms using Q-CTRL performance management on IBM Quantum services. The reported metrics represent current mean values across multiple benchmarking experiments, and they are expected to improve over the coming weeks and months as Q-CTRL's methods continue to evolve.
Validating and reproducing these metrics is welcomed and encouraged. Learn how to get started with Q-CTRL performance management, and run the Bernstein–Vazirani algorithm.