Quantum Algorithms Now Need Far Fewer Measurements Thanks to New Grouping Technique

Jeremiah Rowland and colleagues at Michigan State University have found that overlapping measurement groupings sharply improve the accuracy of energy estimations in quantum computations. The technique, in which operators appear in multiple measurement groups, achieves maximal variance reduction proportional to the number of Hamiltonian terms. A collaboration between Michigan State University and the University of California introduces a new ‘repacking’ algorithm to optimise these groupings and iteratively lower variance. Numerical simulations, performed on Hamiltonians containing up to 44 qubits and 575,000 terms, reveal that this approach scales favourably, offering a potentially key strategy for quantum energy estimation on increasingly large and complex systems.

Multiple compatible groupings unlock substantial variance reduction in quantum energy estimation

The maximal achievable variance reduction grows linearly with the number of Hamiltonian terms, up to 575,000 in the largest benchmark studied, surpassing limits previously imposed by disjoint grouping methods. Traditionally, quantum algorithms employing grouping-based measurement strategies have relied on assigning each operator, a mathematical function representing a physical property, to a single, disjoint group. This approach, while simplifying measurement complexity, inherently limits the potential for variance reduction. The new research demonstrates that allowing operators to appear in multiple compatible groups unlocks a maximal variance reduction linear in the number of Hamiltonian terms, a departure from these earlier constraints. The Hamiltonian, in this context, represents the total energy of the quantum system being simulated. This breakthrough enables calculations on systems with up to 44 qubits and 575,000 terms, a scale previously inaccessible for precise energy estimation, particularly in fields like quantum chemistry and materials science where accurate energy calculations are paramount.
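The intuition behind sharing an operator across several groups can be illustrated with a toy Monte Carlo model. This is only a sketch, not the authors’ implementation: it treats groups as independent, ignores covariances between co-measured operators, and simply splits a term’s coefficient evenly across the groups that contain it.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_mean(true_exp, shots):
    """Sample mean of +/-1 measurement outcomes with expectation true_exp."""
    p_plus = (1.0 + true_exp) / 2.0
    outcomes = rng.choice([1.0, -1.0], size=shots, p=[p_plus, 1.0 - p_plus])
    return outcomes.mean()

def overlapped_estimate(true_exp, shots_per_group, n_groups):
    """Estimate <P> with its coefficient split evenly over n_groups
    compatible groups. Each group is measured independently, so the
    term effectively receives n_groups * shots_per_group samples."""
    return sum(group_mean(true_exp, shots_per_group) / n_groups
               for _ in range(n_groups))

trials = 2000
disjoint = [overlapped_estimate(0.3, 100, 1) for _ in range(trials)]
overlap4 = [overlapped_estimate(0.3, 100, 4) for _ in range(trials)]

# Sharing the term across 4 groups cuts its variance roughly fourfold
ratio = np.var(disjoint) / np.var(overlap4)
print(ratio)
```

In this simplified model the variance of a term measured in k groups drops by a factor of k, which is the mechanism the overlapped-grouping bound formalises at the scale of a full Hamiltonian.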
Improvements of up to 2.35x over state-of-the-art disjoint grouping methods were observed on molecular benchmarks, with variance reduction growing linearly with problem size. Molecular benchmarks are crucial for validating quantum algorithms against known chemical properties and reaction energies. The observed linear scaling is particularly significant, suggesting that the benefits of this overlapped grouping technique will continue to accrue as quantum computers increase in size and complexity. These calculations were performed on Hamiltonians representing systems with up to 44 qubits and 575,000 terms, sharply exceeding the scale of previous numerical studies. The ability to accurately simulate systems of this size is a substantial step towards tackling real-world problems currently intractable for classical computers. To systematically transform existing groupings, a ‘repacking’ algorithm was developed, iteratively lowering variance and paving the way for more reliable quantum computations. This algorithm functions by intelligently reassigning operators between groups, minimising the overall uncertainty in the energy estimation.
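A minimal sketch of what one such repacking step might look like, under a deliberately simplified variance model: the function names, the even-to-proportional reweighting rule, and the assumption that groups contribute independent variance terms are all illustrative assumptions, not the paper’s actual algorithm.

```python
def term_variance(weights, shots, c=1.0, sigma2=1.0):
    """Variance contribution of one Hamiltonian term whose coefficient c
    is split across groups with fractions `weights` (summing to 1),
    where the g-th group is measured with shots[g] samples."""
    return sum((w * c) ** 2 * sigma2 / m for w, m in zip(weights, shots))

def repack(shots):
    """One deterministic repacking step: split the coefficient in
    proportion to each group's shot budget, which minimises the
    variance model above (inverse-variance weighting)."""
    total = sum(shots)
    return [m / total for m in shots]

shots = [100, 400]        # measurement budgets of two compatible groups
uniform = [0.5, 0.5]      # naive even split of the coefficient
before = term_variance(uniform, shots)       # 0.003125
after = term_variance(repack(shots), shots)  # 0.002
```

Because the reweighting is a closed-form minimiser of this toy variance model, each step can only lower (or preserve) the variance, mirroring the iterative, monotonic improvement the article attributes to the repacking algorithm.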
The team’s deterministic procedure transforms initial groupings without requiring changes to measurement bases, enabling implementation with data from prior experiments. Measurement bases define the specific way in which quantum information is read out from the qubits. Maintaining the same measurement basis simplifies experimental implementation and allows for seamless integration with existing quantum hardware.

Currently, these results rely on specific models of Hamiltonian coefficients and do not yet demonstrate consistent performance across all realistic molecular systems. Hamiltonian coefficients determine the strength of interactions between different parts of the quantum system. The dependence on specific coefficient models represents a current limitation, as real-world molecular systems often exhibit more complex and varied interactions. Further investigation will focus on broadening the algorithm’s applicability to a wider range of molecular systems and coefficient models, potentially through the incorporation of adaptive techniques that learn the optimal grouping strategy for each specific Hamiltonian.

Optimising measurement grouping enhances data yield from near-term quantum devices

Scientists are refining techniques to extract meaningful data from quantum computers, essential for simulating complex molecular interactions and designing new materials. Quantum computers, while promising, are susceptible to noise and errors, which can significantly degrade the accuracy of calculations. Optimising measurement strategies is therefore crucial for maximising the signal-to-noise ratio and obtaining reliable results. However, the benefits of this optimisation are currently demonstrated relative to existing ‘disjoint grouping’ methods alone.
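Because the deterministic procedure changes only the classical coefficient splits, not the measurement bases, a new grouping can in principle be evaluated on already-recorded shot data. A hedged sketch of that post-processing step follows; the outcome arrays, group names, coefficient value, and splits are invented for illustration.

```python
import numpy as np

# Previously recorded +/-1 outcomes for one Pauli term P that was
# co-measured inside two compatible groups (hypothetical archival data)
recorded = {
    "group_A": np.array([1, 1, -1, 1]),
    "group_B": np.array([1, -1, 1, 1, 1, -1]),
}

def term_estimate(splits, c=0.5):
    """Re-estimate c*<P> under a new coefficient split, purely in
    post-processing: no new quantum measurements are required, since
    the measurement bases (and hence the recorded data) are unchanged."""
    return sum(splits[g] * c * recorded[g].mean() for g in recorded)

# Reweighting toward the better-sampled group reuses the same shot data
estimate = term_estimate({"group_A": 0.4, "group_B": 0.6})
```

The same dictionary of recorded outcomes can be re-evaluated under any candidate split, which is what makes the procedure compatible with data from prior experiments.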
The authors acknowledge a gap in their analysis: they do not detail performance against other established variance reduction strategies, leaving open whether this approach represents a genuinely superior technique or simply an improvement on a specific baseline. Other variance reduction techniques include expectation value propagation and symmetry verification, and a comprehensive comparison would provide a more complete picture of the algorithm’s strengths and weaknesses. Despite lacking direct comparison with all variance reduction techniques, this work offers a valuable advance in optimising quantum computations.

Numerical simulations, extending to systems with 44 quantum bits, reveal that variance reduction scales linearly with problem size, suggesting potential for significant gains on larger, more complex problems. This linear scaling is a key indicator of the algorithm’s potential for scalability, as it implies that the computational cost of variance reduction will not increase exponentially with system size. A theoretical limit on how much variance can be reduced when estimating energy in quantum computations has been established, demonstrating a linear relationship with the complexity of the system. This theoretical understanding provides a benchmark against which the performance of the new algorithm can be evaluated and helps to identify areas for further improvement.

By reorganising measurement groupings, it is possible to maximise the efficiency of data extraction from quantum hardware, allowing measurement operators, the building blocks of calculations, to be shared between multiple groups, iteratively lowering uncertainty in results. This sharing of operators effectively increases the number of measurements performed for each operator, leading to a more accurate estimate of its corresponding energy contribution.
The research demonstrated that reorganising how measurements are grouped in quantum computations can reduce the uncertainty in energy estimations by a factor that scales linearly with the number of terms in the Hamiltonian. This is significant because it offers a method to improve the efficiency of extracting data from quantum hardware, potentially enabling more accurate results. The authors proved a theoretical limit to variance reduction and showed their ‘repacking’ algorithm iteratively lowers uncertainty through shared measurement operators. Simulations with systems of up to 44 qubits and 575,000 terms confirmed this linear scaling, although further work is needed to compare this approach with other established variance reduction strategies.

More information: Overlapped groupings for quantum energy estimation: Maximal variance reduction and deterministic algorithms for reducing variance. ArXiv: https://arxiv.org/abs/2604.07156
