
Single-Shot Quantum Networks Promise Far Fewer Measurements for Accurate Results

Quantum Zeitgeist
10 min read
Jaemin Seo at Chung-Ang University and colleagues have developed a framework that integrates quantum amplitude estimation into the readout stage of quantum neural networks, creating a “single-shot” network. The approach achieves a sharply improved error rate of O(1/N) with only one measurement, compared with the O(1/√N) error typical of conventional Monte-Carlo inference. This advance reduces the computational cost of both training and using QNNs on current and near-term quantum hardware, and demonstrates the potential for quantum algorithms to optimise quantum machine learning.

Quantum neural networks and the variational quantum circuit approach

Superposition and entanglement in quantum devices provide algorithmic advantages in areas including optimisation, simulation, and probabilistic inference. The recent success of machine learning has stimulated intense interest in combining quantum computation with learning paradigms, giving rise to the rapidly growing field of quantum machine learning (QML). Within this context, quantum neural networks (QNNs) have emerged as a prominent class of variational quantum models, aiming to extend the expressive power of classical neural networks into the quantum domain.

A QNN typically consists of a parameterised quantum circuit in which classical input data are encoded into qubit states through unitary rotations, followed by layers of trainable quantum gates. QNNs offer several attractive features, including compact representations in high-dimensional Hilbert spaces, natural compatibility with quantum data, and the potential for favourable scaling in specific tasks. Researchers have investigated them in a wide range of applications, from classification and regression to physics-informed modelling and surrogate representations of complex quantum systems.

A single forward pass in a classical model deterministically produces an output for a given input.
However, the intrinsic uncertainty of quantum measurements means the output of a QNN is not a deterministic value but a random variable whose statistics are governed by the Born rule. Consequently, the desired output must be inferred from repeated executions of the same circuit with the same input, each followed by a measurement, and the final prediction is obtained as an empirical expectation value. This inference procedure is effectively a Monte Carlo (MC) sampling process: estimating an output probability from N repeated shots yields a standard error that scales as O(1/√N).

This sampling error imposes a significant computational burden; achieving even modest accuracy of around 1% typically requires tens of thousands of circuit executions. Such large shot counts induce substantial time overhead on superconducting or trapped-ion platforms due to repeated initialisation, measurement, and control cycles. The situation is even more difficult for photonic quantum platforms, where each shot requires the physical generation of new qubits (photons). In many photonic implementations, producing a single effective photon can demand hundreds to thousands of high-power laser pulses, making multi-shot MC-based inference prohibitively expensive.

These limitations of multi-shot inference extend beyond hardware considerations, as repeated measurements are intrinsically restricted in certain applications: inference on invasive or fragile biological samples, experiments involving high-cost optical setups, or scenarios in which one aims to predict extremely rare events with very small probabilities. In such cases, the requirement of large numbers of identical circuit executions makes conventional MC-based readout unsuitable, regardless of the underlying quantum hardware. From an algorithmic perspective, however, this sampling bottleneck is not fundamental.
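The 1/√N shot-noise scaling is easy to reproduce numerically. The following is an illustrative Python sketch (not code from the paper), with a hypothetical target probability p_true = 0.3 and repetition counts chosen purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(p_true, n_shots):
    """Estimate a Born-rule probability from n_shots Bernoulli samples,
    mimicking repeated circuit executions followed by binary measurement."""
    return (rng.random(n_shots) < p_true).mean()

p_true = 0.3
se = {}
for n in (100, 10_000):
    # Empirical spread of the estimator over 300 repeated "experiments";
    # theory predicts a standard error of sqrt(p(1-p)/n), i.e. O(1/sqrt(n))
    estimates = np.array([mc_estimate(p_true, n) for _ in range(300)])
    se[n] = estimates.std()
    print(n, se[n], np.sqrt(p_true * (1 - p_true) / n))
```

Multiplying the shot count by 100 shrinks the error only tenfold, which is why percent-level accuracy already demands tens of thousands of shots.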
For probability estimation problems, quantum amplitude estimation (AE) provides a principled mechanism to reduce the required resources quadratically, from O(N) to O(√N), compared with classical MC sampling. This reduction does not stem from repeated measurements but from coherent quantum interference enabled by Grover-type iterations. The O(√N) scaling reflects the number of controlled Grover operations applied within the circuit, while the number of measurement shots required for readout can be reduced to a “single shot”, or a few shots, while maintaining high estimation accuracy.

The researchers introduce a framework that integrates quantum amplitude estimation with quantum neural networks to dramatically reduce the resources and shots required for QNN inference. By embedding a fixed QNN within an amplitude estimation protocol, they demonstrate that output probabilities can be inferred with accuracy comparable to conventional MC-based readout while using orders of magnitude fewer measurement shots. Remarkably, accurate inference is achievable even in the single-shot limit. This capability is particularly relevant for photonic quantum platforms, where qubit generation constitutes a dominant cost, and suggests a viable pathway toward practical QML inference on hardware where standard sampling-based approaches are infeasible.

More broadly, the results highlight a key principle in quantum computation: when the model itself is quantum, quantum algorithms can offer decisive advantages not only in training or optimisation but also in the readout and inference stages. By explicitly exploiting quantum algorithms for probabilistic estimation, the authors show that the apparent sampling overhead of QNNs is not an inherent limitation but can be overcome through algorithmic design.

QNNs are commonly formulated using parameterised quantum circuits (PQCs), which serve as variational models operating on quantum states.
A PQC consists of a sequence of unitary operations whose structure is fixed, while a set of continuous parameters is optimised through classical computational feedback. Given their analogy to classical neural networks, where parameters are updated to minimise a loss function, PQCs provide a natural foundation for constructing QNNs.

Consider an n-qubit quantum register initialised in a reference state |0⟩⊗n. For a given classical input x, data are encoded into the quantum state through an input-dependent unitary operation Uenc(x). This is followed by a parameterised unitary operator U(θ) = ∏_{l=1}^{L} Ul(θl), where θ = {θ1, …, θP} denotes the trainable parameters of the QNN. The overall state preparation can thus be written as |ψ(x; θ)⟩ = U(θ) Uenc(x) |0⟩⊗n. The circuit U = {Ul} is typically composed of alternating layers of single-qubit rotations and entangling gates, ensuring sufficient expressivity. Learning adjusts θ such that the quantum state |ψ(x; θ)⟩ encodes task-relevant information about the input x.

Unlike classical neural networks, the output of a QNN cannot be directly read from the final state vector; instead, information is extracted via quantum measurement. Let O be a Hermitian observable acting on the system, or equivalently a projector onto a “good” subspace spanned by |ψ1⟩, such as |1⟩⊗n. The QNN output is defined as the expectation value p(x; θ) = ⟨ψ(x; θ)| O |ψ(x; θ)⟩, which can be interpreted as a probability or a regression output depending on the application. In practice, this expectation value cannot be accessed in a single execution of the circuit; the circuit must be executed repeatedly and the observable measured many times. For a binary measurement outcome, each circuit execution produces a Bernoulli random variable, and the estimator p̂ obtained from N repeated shots satisfies Var(p̂) = p(1 − p)/N, implying a standard error that scales as O(1/√N).
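A toy two-qubit PQC of exactly this shape — RY rotations for data encoding, a trainable RY layer, a CNOT entangler, and the projector onto |11⟩ as the observable — can be simulated directly with statevectors. This is an illustrative sketch, not the circuit used in the paper:

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, in the basis |00>, |01>, |10>, |11>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def qnn_output(x, theta):
    """p(x; theta) = |<11|psi(x; theta)>|^2 for a toy two-qubit QNN."""
    state = np.array([1., 0., 0., 0.])                 # |00>
    state = np.kron(ry(x[0]), ry(x[1])) @ state        # encoding U_enc(x)
    state = np.kron(ry(theta[0]), ry(theta[1])) @ state  # trainable layer
    state = CNOT @ state                               # entangler
    return state[3] ** 2                               # projector onto |11>
```

In a real device this expectation value is not directly visible; it would be estimated from N binary measurement outcomes, with the Var(p̂) = p(1 − p)/N shot noise described above.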

This Monte Carlo sampling noise is intrinsic to quantum measurement and constitutes a fundamental overhead in QNN inference.

Training a QNN proceeds by minimising a classical loss function L, defined in terms of the measured outputs p(x; θ) and target values. Gradient-based optimisation methods are commonly employed, requiring the evaluation of partial derivatives ∂L/∂θi, or equivalently ∂p/∂θi. For PQCs, these derivatives can be computed exactly using the parameter-shift rule, which for a single parameter θi takes the form ∂p/∂θi = (1/2)[p(x; θi+) − p(x; θi−)], where θi± = θ ± (π/2)ei and ei is the unit vector along θi in parameter space. Consequently, computing the gradient with respect to P parameters requires at least 2P + 1 distinct circuit evaluations per training step: one for the forward pass and two for each parameter shift. When each expectation value is itself estimated using N shots, the total number of circuit executions per optimisation step scales as (2P + 1)N. This quickly leads to a substantial sampling cost, particularly for QNNs with many parameters or for datasets requiring repeated evaluations.

QNN training is therefore significantly more resource-intensive than inference: while inference requires repeated shots only to estimate a single expectation value, training involves repeated measurements across many parameter-shifted circuits. This disparity is especially problematic for platforms where shot execution is expensive.

Quantum amplitude estimation (AE) is a quantum algorithm designed to estimate the probability amplitude associated with a target subspace more efficiently than classical MC sampling. Its foundation lies in Grover’s algorithm and the principle of amplitude amplification. Consider a unitary operator (or oracle) A that prepares a quantum state A|0⟩⊗n = √(1 − a) |ψ0⟩ + √a |ψ1⟩, where |ψ1⟩ spans the “good” subspace (|ψ0⟩ the rest) and a ∈ [0, 1] is the target probability to be estimated.
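Returning briefly to the training cost: the parameter-shift rule can be checked numerically on a hypothetical one-parameter model with p(θ) = sin²(θ/2) — the output of a single RY rotation measured in the computational basis, chosen here only for illustration. The shifted-circuit difference reproduces the analytic derivative sin(θ)/2 exactly:

```python
import numpy as np

def qnn_prob(theta):
    """Toy one-parameter QNN output: p(theta) = |<1|RY(theta)|0>|^2."""
    return np.sin(theta / 2) ** 2

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    """Exact gradient of a PQC expectation value via the parameter-shift
    rule: dp/dtheta = (1/2) [p(theta + pi/2) - p(theta - pi/2)]."""
    return 0.5 * (f(theta + shift) - f(theta - shift))

theta = 0.7
grad = parameter_shift_grad(qnn_prob, theta)
print(grad, np.sin(theta) / 2)  # the two values agree to machine precision
```

On hardware, each of the two shifted evaluations is itself an N-shot estimate, which is where the (2P + 1)N per-step cost comes from.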
Classical MC estimation of a requires O(1/ε²) samples to achieve an additive error ε. Grover’s amplitude amplification constructs a unitary operator Q = −A S0 A† Sχ, where S0 and Sχ are selective phase reflections about the initial state and the good subspace, respectively. Repeated application of Q performs a rotation in the two-dimensional subspace spanned by {|ψ0⟩, |ψ1⟩}, coherently amplifying the amplitude of the good state.

Canonical AE combines amplitude amplification with quantum phase estimation enabled by this rotation structure. Writing a = sin²(φ), the Grover operator Q has eigenvalues e^{±2iφ}. Applying quantum phase estimation to Q allows estimation of the phase φ and thereby recovery of a. In its standard formulation, AE employs an evaluation register of m qubits and performs a sequence of controlled Grover operations Q^{2^k} for k = 0, …, m − 1, followed by an inverse quantum Fourier transform (QFT). Measuring the evaluation register yields an m-bit estimate of the phase, from which the amplitude is reconstructed as â = sin²(πz/2^m), where z ∈ {0, 1, …, 2^m − 1} denotes the measurement outcome index. The resulting estimation error scales as O(1/2^m), corresponding to a quadratic improvement over classical MC methods in terms of the number of Grover queries Nquery = 2^m.

A key distinction is that this quadratic speedup is achieved through coherent quantum evolution rather than repeated measurements. While the canonical AE circuit contains O(2^m) Grover iterations, the number of measurement shots required to extract the estimate can be as low as a single shot, or a few shots to suppress readout noise.

Single-shot inference minimises measurement burden in quantum machine learning

Quantum neural networks promise a route to machine learning tasks intractable for classical computers, but realising this potential demands overcoming a fundamental hurdle: the sheer number of measurements needed to train and operate these networks.
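The canonical-AE readout map described above — measure z, reconstruct â = sin²(πz/2^m) — can be sketched under the simplifying assumption of ideal, noiseless phase estimation, in which the evaluation register returns the m-bit grid point nearest the true phase. The reconstruction error then shrinks roughly as 1/2^m:

```python
import numpy as np

def ae_readout(a_true, m):
    """Idealised canonical-AE readout: phase estimation with an m-qubit
    evaluation register returns the grid point z nearest the true phase
    phi (where a = sin^2(phi)), and the amplitude is reconstructed as
    a_hat = sin^2(pi * z / 2^m)."""
    phi = np.arcsin(np.sqrt(a_true))     # phase encoded in Q's eigenvalues e^{+-2i phi}
    z = round((2 ** m) * phi / np.pi)    # nearest m-bit evaluation-register outcome
    return np.sin(np.pi * z / 2 ** m) ** 2

a = 0.3
err = {m: abs(ae_readout(a, m) - a) for m in (4, 8, 12)}
for m, e in err.items():
    print(m, e)  # error falls roughly as 1/2^m, with Nquery = 2^m Grover calls
```

Each doubling of Nquery halves the error — the O(1/N) scaling quoted for the single-shot network — whereas Monte-Carlo readout would need to quadruple its shot count for the same gain.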
This new framework, which integrates quantum amplitude estimation to achieve single-shot inference, offers a compelling solution by drastically reducing the demand for repeated circuit executions. It provides a pathway to quantum machine learning even with relatively small and imperfect quantum processors, potentially unlocking applications beyond the reach of today’s technology.

The research demonstrated a quantum neural network achieving an error rate of O(1/N) with a single measurement, a substantial improvement over the O(1/√N) error typical of conventional Monte-Carlo inference. This is significant because it reduces the number of costly qubit generations needed for quantum neural network operation. By integrating quantum amplitude estimation into the readout stage, outputs are estimated through coherent interference rather than repeated sampling, bypassing the numerous circuit executions that burden current quantum machine learning. The authors also analysed noise robustness and training feasibility, suggesting that quantum algorithms can enhance the efficiency of quantum computation itself, even on near-term hardware.

More information: “Single-shot quantum neural networks with amplitude estimation”, arXiv: https://arxiv.org/abs/2604.19320


Tags

quantum-machine-learning
quantum-investment
quantum-algorithms
quantum-hardware
quantum-communication
