Has quantum advantage been achieved? Part 2: Considering the evidence

Welcome back to: Has quantum advantage been achieved? In Part 1 of this mini-series on quantum advantage demonstrations, I told you about the idea of random circuit sampling (RCS) and the experimental implementations thereof. In this post, Part 2 out of 3, I will discuss the arguments and evidence for why I am convinced that the experiments demonstrate a quantum advantage.

Recall from Part 1 that to assess an experimental quantum advantage claim we need to check three criteria: Does the experiment correctly solve a computational task? Does it achieve a scalable advantage over classical computation? Does it achieve an in-practice advantage over the best classical attempt at solving the task?

What’s the issue?

When assessing these criteria for the RCS experiments, there is an important problem: the early quantum computers we ran them on were very far from being reliable, and the computation was significantly corrupted by noise. How should we interpret this noisy data? Or more concisely: Is random circuit sampling still classically hard even when we allow for whatever amount of noise the actual experiments had? Can we be convinced from the experimental data that this task has actually been solved?

I want to convince you today that we have developed a very good understanding of these questions that gives a solid underpinning to the advantage claim. Developing that understanding required a mix of methodologies from different areas of science, including theoretical computer science, algorithm design, and physics, and it has been an exciting journey over the past years.

The noisy sampling task

Let us start by answering the most basic question: What computational task did the experiments actually solve? Recall that, in the ideal RCS scenario, we are given a random circuit $C$ on $n$ qubits and the task is to sample from the output distribution of the state $\ket{C}$ obtained from applying the circuit $C$ to a simple reference state. The output probability distribution of this state is determined by the Born rule when I measure every qubit in a fixed choice of basis.

Now what does a noisy quantum computer do when I execute all the gates on it and apply them to its state? Well, it prepares a noisy version $\rho_C$ of the intended state $\ket{C}$, and once I measure the qubits, I obtain samples from the output distribution of that noisy state. We should not make the task dependent on the specifics of that state or the noise that determined it, but we can define a computational task based on this observation by fixing how accurate that noisy state preparation needs to be. The natural way to do this is to use the fidelity $F(C) = \bra{C} \rho_C \ket{C}$, which is just the overlap between the ideal state and the noisy state. The fidelity is 1 if the noisy state is equal to the ideal state, and 0 if it is perfectly orthogonal to it.

Finite-fidelity random circuit sampling: Given a typical random circuit $C$, sample from the output distribution of any quantum state whose fidelity with the ideal output state $\ket{C}$ is at least $\delta$.

Note that finite-fidelity RCS does not demand success for every circuit, but only for typical circuits from the random circuit ensemble. This matches what the experiments do: they draw random circuits and need the device to perform well on the overwhelming majority of those draws. Accordingly, when the experiments quote a single number as “fidelity”, it is really the typical (or, more precisely, circuit-averaged) fidelity that I will just call $F$.
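To make these definitions concrete, here is a minimal numerical sketch on a toy scale. It is not what the experiments do: the ideal output state is stood in for by a Haar-random vector, and the noise is modeled by a single global depolarizing channel with a made-up strength $p$; real devices have far more complicated local noise. The sketch only illustrates the fidelity $F(C) = \bra{C}\rho_C\ket{C}$ and the fact that measuring $\rho_C$ samples bitstrings from its output distribution.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 3            # number of qubits (toy scale)
dim = 2**n

# Stand-in for the ideal output state |C> of a random circuit C:
# a Haar-random state (deep random circuits approximate this well).
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

# Noisy state rho_C: a globally depolarized version of the ideal state.
p = 0.9          # depolarizing strength (illustrative, not experimental)
rho_noisy = (1 - p) * np.outer(psi, psi.conj()) + p * np.eye(dim) / dim

# Fidelity F(C) = <C| rho_C |C>  ~  (1 - p) + p / 2^n.
# Note the 2^{-n} floor contributed by the maximally mixed part.
fidelity = np.real(psi.conj() @ rho_noisy @ psi)
print(f"F(C) = {fidelity:.4f}")

# Sampling: measuring every qubit of rho_C in the computational basis
# draws bitstrings from the diagonal of rho_C (the Born rule).
probs = np.real(np.diag(rho_noisy))
samples = rng.choice(dim, size=5, p=probs)
print([format(s, f"0{n}b") for s in samples])
```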
The noisy experiments claim to have solved finite-fidelity RCS for values of $\delta$ around 0.1%. What is more, they consistently achieve this value even as the circuit sizes are increased in the later experiments. Both the actual value and the scaling will be important later. So what is the complexity of finite-fidelity RCS?

Quantum advantage of finite-fidelity RCS

Let’s start off by supposing that the quantum computation is (nearly) perfectly executed, so that the required fidelity $\delta$ is quite large, say, 90%. In this scenario, we have very good evidence based on computational complexity theory that there is a scalable and in-practice quantum advantage for RCS. This evidence is very strong, comparable to the evidence we have for the hardness of factoring and simulating quantum systems. The intuition behind it is that quantum output probabilities are extremely hard to compute because of a mechanism behind quantum advantages: destructive interference. If you are interested in the subtleties and the open questions, take a look at our survey.

The question now is: how far down in fidelity does this classical hardness persist? Intuitively, the smaller we make $\delta$, the easier finite-fidelity RCS should become for a classical algorithm (and a quantum computer, too), since the freedom we have in deviating from the ideal state in our simulation becomes larger and larger. This increases the possibility of finding a state that turns out to be easy to simulate within the fidelity constraint.

Somewhat surprisingly, though, finite-fidelity RCS seems to remain hard even for very small values of $\delta$. I am not aware of any efficient classical algorithm that achieves the finite-fidelity task for $\delta$ significantly away from the trivial baseline value of $2^{-n}$. This is the value a maximally mixed or randomly picked state achieves, because a random state has no correlation with the ideal state (or any other state), and $2^{-n}$ is exactly the overlap you expect in that case (while 0 would correspond to perfect anti-correlation). One can save some classical runtime compared to solving near-ideal RCS by exploiting a reduced fidelity, but the costs remain exponential. The best known classical approaches to finite-fidelity RCS are reported in the papers that performed classical simulations with the parameters of the first Google and USTC experiments (classSim1, classSim2). To achieve this, however, they needed to approximately simulate the ideal circuits at an immense cost. To the best of my knowledge, all but those first two experiments are far out of reach for these algorithms.

Getting the scaling right: weak noise and low depth

So what is the right value of $\delta$ at which we can hope for a scalable and in-practice advantage of RCS experiments? When thinking about this question, it is helpful to keep in mind a model of the circuit that a noisy experiment runs. So, let us consider a noisy circuit on $n$ qubits with $d$ layers of gates and single-qubit noise of strength $\varepsilon$ on every qubit in every layer. In this scenario, the typical fidelity with the ideal state will decay as $F \sim \exp(-\varepsilon n d)$. Any reasonably testable value of the fidelity needs to scale as $1/\mathsf{poly}(n)$, since eventually we need to estimate the average fidelity $F$ from the experimental samples, and this typically requires at least $1/F^2$ samples, so exponentially small fidelities are experimentally invisible.
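To get a feeling for these numbers, here is a short back-of-the-envelope sketch. The noise rate and circuit sizes below are made-up illustrative values, not the parameters of any particular experiment; the point is only how quickly $F \sim \exp(-\varepsilon n d)$ and the $\sim 1/F^2$ sample cost blow up when $\varepsilon$ is held fixed as the circuits grow.

```python
import numpy as np

# Back-of-the-envelope: fidelity decay F ~ exp(-eps * n * d) and the
# ~1/F^2 samples needed to estimate F. All parameter values are
# illustrative, not taken from any specific experiment.
eps = 0.003  # error rate per qubit per layer (hypothetical, held fixed)

for n, d in [(20, 10), (50, 20), (70, 25), (100, 30)]:
    F = np.exp(-eps * n * d)
    print(f"n={n:3d}  d={d:2d}  F ~ {F:.2e}  samples ~ {1 / F**2:.1e}")
```

With $\varepsilon$ held fixed, $F$ decays exponentially in $nd$, which is exactly why the noise rate has to come down as the experiments scale up; that is the regime discussed next.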
The polynomial fidelity $\delta$ is also much closer to the near-ideal scenario ($\delta \geq 90\%$) than to the trivial scenario ($\delta = 2^{-n}$). While we cannot formally pin this down, the intuition behind the complexity-theoretic evidence for the hardness of near-ideal RCS persists into the $\delta \sim 1/\mathsf{poly}(n)$ regime: to sample up to such high precision, we still need a reasonably accurate estimate of the ideal probabilities, and getting this is computationally extremely difficult. Scalable quantum advantage in this regime is therefore a pretty safe bet.

How do the parameters of the experiment and the RCS instances need to scale with the number of qubits $n$ to experimentally achieve this fidelity regime? The limit to consider is one in which the noise rate decreases with the number of qubits, while the circuit depth is only allowed to increase very slowly. The precise scaling depends on the circuit architecture, i.e., the choice of circuit connectivity and the gate set, through a constant $c_A$, as I will explain in more detail below.

Weak-noise and low-depth scaling:
(Weak noise) The local noise rate of the quantum device scales as $\varepsilon \sim c_A/n$.
(Low depth) The circuit depth grows only very slowly with the number of qubits, as $d \sim \log(n)$.

In this regime, the typical fidelity $F \sim \exp(-\varepsilon n d) \sim \exp(-c_A \log n) = n^{-c_A}$ indeed remains polynomially large, just as required.
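As a quick sanity check on this scaling, here is a short numerical sketch. The value $c_A = 2$ is a placeholder, since the actual constant depends on the architecture; the point is that $F$ then tracks $n^{-c_A}$, a polynomially rather than exponentially small number.

```python
import numpy as np

# Sanity check: with weak noise (eps ~ c_A / n) and low depth (d ~ log n),
# the fidelity F ~ exp(-eps * n * d) stays at 1/poly(n). The constant
# c_A = 2 is a placeholder; its true value depends on the architecture.
c_A = 2.0

for n in [16, 64, 256, 1024]:
    eps = c_A / n      # weak noise: rate decreases with qubit number
    d = np.log(n)      # low depth: only logarithmic growth
    F = np.exp(-eps * n * d)
    print(f"n={n:5d}  F ~ {F:.2e}  vs  n^(-c_A) = {n**(-c_A):.2e}")
```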
