
Simulations Improve Quantum Algorithm Performance on Larger Systems

Quantum Zeitgeist

Ryo Watanabe and colleagues at The University of Osaka, in collaboration with Boston University and the Center for Computational Quantum Physics at the Flatiron Institute, have developed a two-dimensional tensor network that accurately simulates quantum circuits used in the Quantum Approximate Optimisation Algorithm applied to Ising spin-glass problems. The work reveals key limits to parameter-transfer strategies for scaling these algorithms, and shows how extending training at larger system sizes can avoid computational pitfalls. The tensor network framework offers a classically feasible and controlled method for benchmarking and effectively training variational quantum algorithms on two-dimensional lattices.

Tensor networks enable accurate simulation of deep quantum approximate optimisation circuits

A tensor-network (TN) framework now simulates Quantum Approximate Optimisation Algorithm (QAOA) circuits on two-dimensional qubit architectures, reaching circuit depths previously unattainable. Tensor networks are a powerful classical tool for representing and manipulating many-body quantum states, offering a more efficient alternative to state-vector simulation, whose cost scales exponentially with system size. This implementation uses a two-dimensional structure designed to mirror the connectivity of physical qubit architectures commonly considered for quantum computation. Parameters trained on small instances and transferred to systems an order of magnitude larger previously improved the sampled energy distribution only up to intermediate circuit depths; the new framework extends that improvement to deeper circuits, unlocking more complex quantum computations.
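To make the tensor-network idea concrete, here is a minimal sketch (not the authors' two-dimensional code) of the simplest TN, a one-dimensional matrix product state: a small statevector is factorised into a chain of site tensors by successive singular value decompositions, then contracted back to verify that the factorised form stores the same state.

```python
import numpy as np

def statevector_to_mps(psi, n_qubits):
    """Decompose a statevector into a chain of site tensors via SVD."""
    tensors = []
    rest = psi.reshape(1, -1)                 # trivial bond on the left
    for _ in range(n_qubits - 1):
        chi = rest.shape[0]
        mat = rest.reshape(chi * 2, -1)       # split off one physical index
        u, s, vh = np.linalg.svd(mat, full_matrices=False)
        tensors.append(u.reshape(chi, 2, -1))  # (left bond, physical, right bond)
        rest = np.diag(s) @ vh                # carry the remainder rightwards
    tensors.append(rest.reshape(rest.shape[0], 2, 1))
    return tensors

def mps_to_statevector(tensors):
    """Contract the MPS chain back into a dense statevector."""
    out = tensors[0]
    for t in tensors[1:]:
        out = np.tensordot(out, t, axes=([-1], [0]))
    return out.reshape(-1)

rng = np.random.default_rng(0)
psi = rng.normal(size=16) + 1j * rng.normal(size=16)
psi /= np.linalg.norm(psi)                    # random 4-qubit state
mps = statevector_to_mps(psi, 4)
print(np.allclose(mps_to_statevector(mps), psi))  # True
```

The efficiency the article describes comes from truncating the small singular values in such decompositions: when entanglement growth stays manageable, the bond dimensions stay modest and the simulation remains classically feasible.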
The ability to simulate deeper circuits accurately is crucial because the performance of QAOA, like that of many other variational quantum algorithms, is generally expected to improve with increasing circuit depth, up to a point. The advance bypasses limitations of both current quantum hardware and existing classical simulation techniques, enabling exploration of QAOA performance at scales beyond the reach of state-vector simulation. Current noisy intermediate-scale quantum (NISQ) devices are limited by qubit count, coherence times, and gate fidelities, hindering the execution of deep, complex circuits, while exact classical simulation is restricted by the exponential growth of the Hilbert space with the number of qubits.

The framework successfully simulated circuits applied to Ising spin-glass problems on both heavy-hexagonal and square lattices, including instances with up to three-body interactions. Ising spin glasses are disordered magnetic systems that serve as a challenging testbed for quantum algorithms because of their complex energy landscapes and many local minima; three-body interactions, while less common in physical systems, add further complexity and test the robustness of the simulation method.

Entanglement growth remained manageable, allowing classically feasible simulations with moderate computational demands. Entanglement, a key resource in quantum computation, can dramatically increase the cost of classical simulation; by carefully controlling the entanglement structure within the tensor network, the researchers kept the computational requirements within reasonable bounds. Accurate sampling on square lattices, however, required substantially more resources than on heavy-hexagonal lattices, indicating that connectivity significantly impacts computational cost and highlighting a remaining barrier to scaling these simulations to truly practical problem sizes.
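A small worked example shows what makes spin glasses a hard testbed. The sketch below (with illustrative random couplings, not an instance from the paper) defines an Ising energy function with two-body terms and one three-body term, then finds the ground state by brute force — feasible only at toy sizes, which is exactly why classical tensor-network simulation of larger instances matters.

```python
import itertools
import random

random.seed(1)
n = 6                                          # toy system: 6 spins
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
J2 = {e: random.choice([-1.0, 1.0]) for e in edges}   # random two-body couplings
J3 = {(0, 1, 2): random.choice([-1.0, 1.0])}          # one three-body term

def energy(spins):
    """Ising spin-glass energy with two- and three-body interactions."""
    e = sum(c * spins[i] * spins[j] for (i, j), c in J2.items())
    e += sum(c * spins[i] * spins[j] * spins[k] for (i, j, k), c in J3.items())
    return e

# Exhaustive search over all 2^n configurations for the ground state.
best = min(itertools.product([-1, 1], repeat=n), key=energy)
print(best, energy(best))
```

The random mixture of ferromagnetic and antiferromagnetic couplings frustrates the system, producing the rugged landscape of competing local minima that QAOA must navigate.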
The heavy-hexagonal lattice, with its sparser connectivity (each qubit couples to at most three neighbours, versus four on the square lattice), limits entanglement growth and provides a more efficient substrate for tensor network contractions, reducing the computational burden compared to the square lattice.

Scaling limits of parameter transfer in tensor network quantum algorithm simulations

Scientists face a key challenge in simulating quantum algorithms as they strive to build practical quantum computers.

This research offers a new way to classically model complex quantum circuits, specifically those used in the Quantum Approximate Optimisation Algorithm (QAOA), for solving difficult problems. QAOA is a hybrid quantum-classical algorithm designed to find approximate solutions to combinatorial optimisation problems. It relies on iteratively optimising a set of variational parameters to minimise an energy function.
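The hybrid loop described above can be sketched end to end at toy scale. The following statevector simulation (illustrative only — the paper uses tensor networks precisely because statevectors do not scale) runs a depth-2 QAOA circuit on a 3-qubit Ising triangle and lets a classical optimiser tune the variational parameters to minimise the energy expectation.

```python
import numpy as np
from scipy.optimize import minimize

n, p = 3, 2                        # 3 qubits, 2 QAOA layers
edges = [(0, 1), (1, 2), (0, 2)]   # frustrated Ising triangle

# Diagonal of the cost Hamiltonian H_C = sum_{(i,j)} Z_i Z_j.
hc = np.zeros(2 ** n)
for b in range(2 ** n):
    s = [1.0 if (b >> q) & 1 == 0 else -1.0 for q in range(n)]
    hc[b] = sum(s[i] * s[j] for i, j in edges)

def expectation(params):
    """Run the p-layer QAOA circuit and return <psi|H_C|psi>."""
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # |+>^n start
    for gamma, beta in zip(params[:p], params[p:]):
        psi = np.exp(-1j * gamma * hc) * psi       # cost layer (diagonal phase)
        rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                       [-1j * np.sin(beta), np.cos(beta)]])
        for q in range(n):                         # mixer: exp(-i beta X) per qubit
            psi = psi.reshape(2 ** (n - q - 1), 2, 2 ** q)
            psi = np.einsum('ab,ibj->iaj', rx, psi).reshape(-1)
    return float(np.real(psi.conj() @ (hc * psi)))

# Classical outer loop: tune (gammas, betas) to minimise the energy.
res = minimize(expectation, x0=np.full(2 * p, 0.1), method='COBYLA')
print(res.fun)  # negative, approaching the triangle's ground energy of -1
```

In the paper's setting the statevector `psi` is replaced by a two-dimensional tensor network, so the same optimisation loop can be run at qubit counts far beyond what a dense vector allows.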

The team found that transferring pre-trained parameters from smaller simulations to larger ones plateaus at a certain circuit depth, suggesting a fundamental constraint on how well solutions learned on small systems generalise. This matters because researchers hope to stretch limited quantum resources by training algorithms on smaller, more manageable quantum devices and then transferring the learned parameters to larger systems. The limitation is not a matter of computational power, but a characteristic of the algorithm itself.

The tensor-network framework offers a new method for benchmarking variational quantum algorithms on two-dimensional lattices, accurately simulating the deep quantum circuits used in QAOA. Benchmarking is essential for assessing the performance of quantum algorithms and comparing different approaches. Modelling QAOA on both heavy-hexagonal and square lattice qubit architectures demonstrated a classically feasible way to explore algorithm performance beyond the reach of current quantum hardware, and the choice of lattice structure affects the efficiency of the tensor network simulation while offering insight into the role of qubit connectivity.

While parameter transfer improves results only up to a point, the findings show that extending training directly on larger systems avoids this constraint and yields lower-energy solutions, offering a pathway past the observed scaling limits. Training on larger systems lets the algorithm adapt to the increased complexity and explore a wider range of candidate solutions. Specifically, the simulations showed that continuing the optimisation at the larger system size, rather than simply applying pre-trained parameters, consistently produced lower energy states, indicating a more effective solution to the Ising spin-glass problem.
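The transfer-then-retrain strategy can be sketched at toy scale. The example below (illustrative problem instances, not the paper's spin-glass benchmarks) optimises depth-1 QAOA parameters on a 3-qubit Ising ring, applies them unchanged to a 6-qubit ring, and then continues the optimisation at the larger size, mirroring the comparison the article describes.

```python
import numpy as np
from scipy.optimize import minimize

def qaoa_energy(n, edges, params, p=1):
    """Statevector QAOA energy <H_C> for H_C = sum_{(i,j)} Z_i Z_j."""
    hc = np.zeros(2 ** n)
    for b in range(2 ** n):
        s = [1.0 if (b >> q) & 1 == 0 else -1.0 for q in range(n)]
        hc[b] = sum(s[i] * s[j] for i, j in edges)
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # |+>^n
    for gamma, beta in zip(params[:p], params[p:]):
        psi = np.exp(-1j * gamma * hc) * psi                   # cost layer
        rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                       [-1j * np.sin(beta), np.cos(beta)]])
        for q in range(n):                                     # mixer layer
            psi = psi.reshape(2 ** (n - q - 1), 2, 2 ** q)
            psi = np.einsum('ab,ibj->iaj', rx, psi).reshape(-1)
    return float(np.real(psi.conj() @ (hc * psi)))

ring = lambda n: [(i, (i + 1) % n) for i in range(n)]

# 1) Train on the small instance.
small = minimize(lambda x: qaoa_energy(3, ring(3), x),
                 x0=np.array([0.1, 0.1]), method='COBYLA')
# 2) Transfer the parameters unchanged to the larger instance.
transferred = qaoa_energy(6, ring(6), small.x)
# 3) Continue training at the larger size, warm-started from the transfer.
retrained = minimize(lambda x: qaoa_energy(6, ring(6), x),
                     x0=small.x, method='COBYLA')
print(transferred, retrained.fun)   # retraining matches or beats transfer
```

Because retraining starts from the transferred parameters, it can only refine them for the larger landscape — the same qualitative effect as the paper's finding that extended training at larger system sizes yields lower-energy solutions.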
This suggests that the parameter landscape changes significantly as the system size increases, and that pre-trained parameters may become trapped in local minima.

The research demonstrated that a tensor-network framework accurately simulates the deep quantum circuits used in the Quantum Approximate Optimisation Algorithm on two-dimensional lattices. This is important because it provides a way to benchmark these algorithms without needing access to large quantum computers. Results showed that transferring parameters trained on small systems to larger ones improves performance only up to a certain depth, whereas extending the training directly on larger systems yields lower-energy solutions. The authors suggest this approach avoids limitations encountered when scaling up quantum algorithms.

More information: Tensor network surrogate models for variational quantum computation, ArXiv: https://arxiv.org/abs/2604.20180


Tags

quantum-machine-learning
energy-climate
government-funding
quantum-algorithms
quantum-hardware
partnership

Source Information

Source: Quantum Zeitgeist