Sensors Bypass Limits to Measure Faint Classical Fields

Scientists are continually seeking methods to enhance the precision of measurements in challenging experimental settings. Jordan Cotler and Daine L. Danielson, from the Department of Physics at Harvard University, together with Ishaan Kannan of the Harvard Quantum Initiative, have developed a quantum sensing framework called Quantum Signal Learning (QSL) that promises to overcome limitations imposed by vacuum fluctuations. Their research details a protocol that uses two-mode squeezing and passive optics to estimate multiple properties of classical signals with significantly reduced noise. The work demonstrates speedups for crucial tasks such as electromagnetic correlation measurement and interferometric cavity control, establishes clear separations from existing classical strategies, and paves the way for exponentially faster sensing of structured classical backgrounds.

For decades, detecting faint signals has been hampered by fundamental limits on measurement precision. This new technique harnesses the principles of quantum mechanics to overcome those barriers and reveal hidden properties of classical fields, promising to sharpen our ability to observe subtle phenomena across a range of scientific disciplines. Modern experiments, ranging from gravitational-wave detectors to microwave cavities, now operate in regimes where the fundamental quantum fluctuations of the electromagnetic field obscure the signals they seek to measure.
These signals are often classical in nature (ambient radiation, fluctuating forces, or control imperfections) coupled to a bosonic sensor, and current inference methods struggle to capture the structured properties of uncertain environments. The new scheme suppresses shot noise below the conventional vacuum limit by combining two-mode squeezing, a technique that reduces quantum noise along a chosen direction, with passive optics and standard homodyne measurements, enabling post-hoc classical estimation from a single experimental dataset. The approach moves beyond traditional single-parameter estimation to address distributional questions, focusing on the functional form of a field's distribution rather than a single value. Evidence suggests the protocol delivers a speedup for common classical sensing tasks, including precise measurement of electromagnetic correlations, real-time feedback control of interferometric cavities, and efficient Fourier-domain matched filtering, a technique used to identify specific waveforms within noisy data. By introducing an optimal-transport conditioning method, the authors demonstrate exponential separations from strategies that do not utilise entanglement, as well as practical speedups over conventional homodyne and heterodyne measurements. Moreover, when squeezing is treated as a valuable resource, a protocol employing squeezed light can sense a structured classical background exponentially faster than any probe that uses only coherent states. Unlike traditional metrology focused on estimating single parameters, QSL extends to a broader property-learning setting, enabling the estimation of complex features of the classical field. To illustrate, the ability to accurately determine quadrature correlations, relationships between the different components of the electromagnetic field, is critical in many applications.
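Fourier-domain matched filtering, mentioned above, can be sketched classically in a few lines. The sketch below is purely illustrative and not taken from the paper: the frequencies, amplitude, and noise level are assumptions, and the score is simply the frequency-domain correlation of the data with each candidate template.

```python
import numpy as np

rng = np.random.default_rng(2)
n, fs = 1024, 1024.0
t = np.arange(n) / fs

# Hypothetical example: a 60 Hz sinusoid buried in white noise
signal = 0.5 * np.sin(2 * np.pi * 60 * t)
data = signal + rng.normal(0.0, 1.0, n)

def matched_filter_score(data, template):
    # Fourier-domain correlation; by Parseval's theorem this equals
    # the time-domain inner product of template and data
    return np.real(np.vdot(np.fft.fft(template), np.fft.fft(data))) / n

templates = {f: np.sin(2 * np.pi * f * t) for f in (20, 60, 200)}
scores = {f: matched_filter_score(data, h) for f, h in templates.items()}
best = max(scores, key=scores.get)
print(best)  # the template matching the hidden 60 Hz waveform scores highest
```

Scoring M templates this way against a classical dataset is exactly the kind of post-hoc estimation the protocol is designed to feed.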
At a fundamental level, this effort places probe energy and entanglement on an equal footing, viewing entanglement as the key resource driving sensing precision. Bell QSL can estimate all M template scores to accuracy ε using only N = O(ε⁻² log M) shots, irrespective of the number of modes n; entanglement-free protocols, by contrast, require N = Ω(3ⁿ) channel uses to score even a single template to constant accuracy. These findings establish a worst-case exponential separation from all entanglement-free strategies for matched filtering and beamforming, two ubiquitous sensing primitives in signal processing and gravitational-wave detection. Specifically, the project considered an n-mode resonant waveform F(t) = Σ_{k=1}^{n} √ω_k (A_k cos(ω_k t) + B_k sin(ω_k t)), which induces a random displacement α ∈ ℂⁿ with law P(α). The observed advantage relies on a structured prior over the sinusoidal coefficients {A_k, B_k} that conceals information from entanglement-free readout. To quantify the tradeoff between quantum advantage and the fraction of physically relevant instances where that advantage persists, an optimal-transport conditioning methodology was introduced. Any single-shot measurement strategy can be described as a map M sending the displacement law P to an outcome distribution M(P). The project then defines ω_{M,C}(η) as the supremum of the Wasserstein-1 distance between distributions in a class C whose measured outcomes differ by at most η in total variation distance. This ω_{M,C} quantifies how much phase-space structure is hidden from M, and thus the ill-conditioning of the inverse problem. Theorem 3 demonstrates that the error in estimating E_P[f] from N measurements is bounded below by Ω(ω_{M,C}(1/N)), meaning that if ω_{M,C} remains strictly positive, no number of measurements can reduce the error below a constant. To illustrate, consider the distributions P = (δ_{(a,b)} + δ_{(−a,−b)})/2 and Q = (δ_{(a,−b−ε)} + δ_{(−a,b+ε)})/2: when ε = 0, P and Q share identical x and p marginals despite residing in distinct phase-space quadrants.
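The two-quadrant example admits a quick numerical check. The sketch below uses illustrative values a = b = 1 (not from the paper): at ε = 0 the two mixtures have identical x and p marginals, but a joint statistic such as the correlation E[xp], of the kind a Bell-type measurement can access, cleanly separates them.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 1.0, 1.0  # illustrative values, not from the paper

def sample_P(n):
    # P: equal mixture of point masses at (a, b) and (-a, -b)
    s = rng.choice([1.0, -1.0], n)
    return np.column_stack([s * a, s * b])

def sample_Q(n, eps=0.0):
    # Q: equal mixture of point masses at (a, -b-eps) and (-a, b+eps)
    s = rng.choice([1.0, -1.0], n)
    return np.column_stack([s * a, -s * (b + eps)])

P, Q = sample_P(100_000), sample_Q(100_000)

# At eps = 0 the x- and p-marginals coincide: each is {-1, +1}, equal weight
print(np.mean(P[:, 0] > 0), np.mean(Q[:, 0] > 0))  # both ~0.5
print(np.mean(P[:, 1] > 0), np.mean(Q[:, 1] > 0))  # both ~0.5
# ...but the joint correlation E[xp] distinguishes the quadrants
print(np.mean(P[:, 0] * P[:, 1]), np.mean(Q[:, 0] * Q[:, 1]))  # +ab vs -ab
```

Any measurement restricted to one quadrature at a time sees only the matching marginals, which is precisely the non-identifiability the construction exploits.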
Homodyne measurements are unable to distinguish these distributions, highlighting the benefit of the Bell measurement in collecting a dataset suitable for classical learning algorithms. Beyond the specific sinusoidal waveform task, the wedge problem illustrates that quantum advantage depends on the extent of phase space explored by the problem class, with the symmetry-breaking parameter ε controlling the visibility of hidden structure to restricted homodyne readout. As ε approaches zero, homodyne shots become non-identifiable and the sample complexity diverges.

Encoding classical fields via continuous-variable entanglement and Bell measurement

Two-mode squeezing, a technique that generates correlated photons in two beams of light, underpins the experimental design described in this effort. Initially, a two-mode squeezed vacuum (TMSV) is prepared, establishing EPR correlations, a type of quantum entanglement, between a sensing mode and an idler beam. Then, an unknown classical field is applied to the sensing mode, inducing a phase-space displacement that encodes information about the field's properties. Rather than directly measuring the field, a continuous-variable (CV) Bell measurement is performed on commuting EPR quadratures of the entangled beams. The Bell measurement yields a complex outcome, denoted ζ, representing a blurred snapshot of the initial phase-space displacement. This outcome adheres to an additive-noise model, ζ = α + Z, where α is the original displacement and Z is an independent complex Gaussian noise term. Crucially, the variance of this noise, governed by the level of squeezing (e⁻²ʳ), is suppressed below the standard vacuum level, enabling enhanced precision. This contrasts with heterodyne detection, which, while informationally complete, retains an irreducible vacuum noise component.
Within a framework that models classical-field sensing as estimating properties of the phase-space response induced by a linear bosonic coupling, optimal-transport conditioning was introduced to establish separations between the proposed protocol and entanglement-free strategies. By leveraging the entangled state and performing the Bell measurement, the system circumvents the limitations of conventional techniques, offering a pathway to quantum-enhanced sensing of classical fields. The choice of two-mode squeezing and Bell measurements is not arbitrary; it provides a means to simultaneously estimate both quadratures of the signal. This project suggests a pathway to bypass the vacuum limit, though not without caveats, and the implications extend beyond achieving lower noise floors. A critical aspect of this advance rests on the energy used to create squeezed light, a non-classical state of light that reduces noise. Unlike traditional methods, the benefit is not necessarily about doing more with less energy overall, but about intelligently allocating that energy to maximise information gain. The observed advantage is not a violation of fundamental limits but a consequence of effectively expanding the computational space available to the sensor: by leveraging squeezing, the system gains access to a larger Hilbert space, akin to adding more qubits in a quantum computer. For a fixed energy budget this advantage diminishes, highlighting that squeezing is best viewed as a resource alongside others, not a magic bullet. Within the debate about quantum advantage, this effort clarifies that the gains arise from entanglement, a uniquely quantum property, rather than from squeezed light alone. At present, maintaining squeezed states and performing the necessary measurements remain technically demanding, and a key question is whether these speedups translate into practical benefits when dealing with highly complex, real-world signals.
Since the theoretical gains depend on specific signal structures, further research must explore the robustness of this approach across a wider range of scenarios. However, the optimal-transport conditioning methods presented here provide a powerful new tool for analysing and designing sensing strategies, potentially opening doors to even more sophisticated techniques in the future.

More information: Quantum Advantage for Sensing Properties of Classical Fields, arXiv: https://arxiv.org/abs/2602.17591
