Quantum Optimisation Cuts Measurement Needs with New Bayesian Approach

Researchers Siran Zhang and Shuming Cheng at Tongji University have developed a resource-efficient Quantum Approximate Optimisation Algorithm (QAOA) framework that optimises the cut value of the most probable measured bitstring, combined with Bayesian optimisation and adaptive shot allocation. The method achieves solution quality comparable to conventional QAOA while requiring fewer measurements to reach the same accuracy, offering a pathway to more practical quantum optimisation, particularly when measurement resources are constrained.

Maximising discrete MaxCut solutions with reduced quantum measurement demands

The new framework reduced the number of quantum shots required to solve 3-regular MaxCut problems while maintaining discrete-solution quality comparable to previous methods, a combination that earlier approaches struggled to deliver under limited resource budgets. The MaxCut problem is a classical combinatorial optimisation problem: partition the vertices of a graph into two disjoint sets so that the number of edges crossing the partition (the ‘cut’) is maximised. The 3-regular variant adds the structural constraint that every vertex has exactly three edges. QAOA is a promising algorithm for tackling such problems on near-term quantum computers, but its effectiveness relies heavily on efficient resource utilisation.
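To make the objective concrete, here is a minimal sketch of how a cut value is evaluated for a candidate partition; the `cut_value` helper and the example graph are illustrative choices, not taken from the paper.

```python
import networkx as nx

def cut_value(graph: nx.Graph, bitstring: str) -> int:
    """Count edges whose endpoints land in different partitions.

    bitstring[i] is the partition label ('0' or '1') of vertex i.
    """
    return sum(1 for u, v in graph.edges if bitstring[u] != bitstring[v])

# Example: the complete graph K4 is the unique 3-regular graph on 4 vertices.
g = nx.random_regular_graph(d=3, n=4, seed=7)
print(cut_value(g, "0101"))  # edges crossing the partition {0, 2} | {1, 3}
```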
The team replaced the conventional expectation-based objective, which optimises the average performance over all measurement outcomes, with one focused on the cut value of the most probable bitstring, directly targeting high-quality discrete outputs. Traditionally, QAOA maximises the expected value of the objective function, averaged over all possible measurement outcomes; the new approach instead identifies the single most likely solution and maximises its cut value. This shift matters because, in practical applications, a single good solution is often sufficient, and the overhead of averaging over many samples can be substantial.

Bayesian optimisation and adaptive shot allocation balance exploration and exploitation during the parameter search, improving both speed and accuracy. Bayesian optimisation, a probabilistic surrogate-modelling technique, uses data from previous evaluations to build a model of the objective function, which it then uses to select the next parameters to evaluate, trading off exploration of new regions of the parameter space against exploitation of promising areas. When quantum circuit executions are limited, this is particularly effective, since it minimises the number of costly quantum computations needed to find good parameters. The algorithm maintains a posterior distribution over the objective landscape, capturing the uncertainty about untried parameter settings, and uses an acquisition function to guide the search.

Intelligently distributing measurement ‘shots’, individual repetitions of circuit execution and readout, based on both confidence in the most likely solution and the variance of cut-value estimates, further improved the results. The number of shots allocated to each parameter setting directly affects the accuracy of the estimated cut value. The framework employs a dual criterion: it prioritises parameter settings where the algorithm is confident in its identification of the most probable bitstring, and it allocates additional shots where the estimated cut value has high variance, indicating a need for further refinement. The sketches below illustrate each of these components in turn.
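First, the mode-based objective: a minimal sketch, reusing the `cut_value` helper above, in which the most frequent bitstring in a batch of shots stands in for the most probable outcome; the function name and return convention are assumptions for illustration.

```python
from collections import Counter

def mode_cut_objective(samples, graph):
    """Cut value of the most frequently observed bitstring.

    samples is a list of measured bitstrings from one batch of shots;
    the empirical mode approximates the most probable outcome.
    Returns the mode's cut value and the fraction of shots that hit it,
    which serves as a confidence estimate for the mode.
    """
    counts = Counter(samples)
    mode_bitstring, n_hits = counts.most_common(1)[0]
    confidence = n_hits / len(samples)
    return cut_value(graph, mode_bitstring), confidence
```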
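Second, the Bayesian parameter search. The sketch below uses scikit-optimize’s `gp_minimize`, a Gaussian-process surrogate with a built-in acquisition function, as one plausible off-the-shelf realisation; the paper’s own surrogate and acquisition choices may differ, and `run_qaoa_circuit` is a hypothetical placeholder for executing the depth-1 QAOA circuit and collecting shots.

```python
import math
from skopt import gp_minimize  # pip install scikit-optimize

def neg_mode_cut(params):
    # Hypothetical: run the QAOA circuit at angles (gamma, beta),
    # collect a batch of shots, and score the most probable bitstring.
    samples = run_qaoa_circuit(params, n_shots=200)  # placeholder, not a real API
    value, _ = mode_cut_objective(samples, g)
    return -value  # gp_minimize minimises, so negate the cut value

# The Gaussian-process posterior and acquisition function handle the
# exploration/exploitation trade-off over the two depth-1 QAOA angles.
result = gp_minimize(
    neg_mode_cut,
    dimensions=[(0.0, 2 * math.pi), (0.0, math.pi)],  # (gamma, beta) bounds
    n_calls=30,                                       # circuit-evaluation budget
)
print("best angles:", result.x, "best mode cut:", -result.fun)
```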
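Third, the adaptive shot allocation. The exact weighting below is an illustrative assumption rather than the paper’s rule, but it follows the dual criterion described above: spend more shots where the mode is confidently identified and where the cut-value estimate is still noisy.

```python
def allocate_shots(confidence, cut_variance, base_shots=100,
                   min_shots=50, max_shots=500):
    """Heuristic shot budget for one parameter setting.

    confidence   -- fraction of shots that hit the mode bitstring (0..1)
    cut_variance -- normalised variance of the cut-value estimate (0..1)
    Both scalings are assumptions chosen for illustration.
    """
    shots = base_shots * (1.0 + confidence) * (1.0 + cut_variance)
    return int(round(min(max(shots, min_shots), max_shots)))

print(allocate_shots(confidence=0.9, cut_variance=0.7))  # promising and noisy -> 323 shots
print(allocate_shots(confidence=0.2, cut_variance=0.1))  # unpromising and stable -> 132 shots
```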
The team conducted experiments on instances of 3-regular MaxCut, a complex optimisation problem, but the observed benefits do not yet translate to a guaranteed speed-up on all problem types, nor demonstrate scalability to significantly larger, real-world applications. MaxCut serves as a benchmark for testing optimisation algorithms, but its applicability extends to areas such as network design, machine learning, and logistics.

Balancing speed and solution stability in MaxCut optimisation

This approach demonstrably improves efficiency for 3-regular MaxCut problems, but the authors acknowledge a tension between optimisation speed and stability. The mode-based objective creates a more rugged landscape for the algorithm to navigate, particularly when multiple bitstrings achieve similar high probabilities: small parameter changes can then flip which bitstring is most probable, causing abrupt jumps in the objective value and instability in the search. The algorithm’s sensitivity to these fluctuations requires careful tuning of the optimisation parameters to ensure convergence, and further research is needed into mitigations, such as incorporating a smoothing mechanism or adopting a more robust objective function (one possible smoothing is sketched below). These gains are so far demonstrated on specific, relatively simple problems, so the results speak to a refined method of optimisation rather than to improvements guaranteed to hold as problems grow in complexity; this tension marks an area for further refinement rather than negating the value of the approach.

Focusing optimisation on the most probable solution offers a demonstrably effective way to improve performance when resources are limited. This work marks a shift in how quantum optimisation algorithms can be structured, moving beyond optimising average performance to prioritising the quality of the most likely solution. Results comparable to standard methods were achieved with fewer quantum ‘shots’ by maximising the cut value of this single best-guess outcome, a significant step towards practical quantum optimisation given the limits of current hardware. Today’s quantum computers, such as those based on superconducting qubits, are constrained in qubit count and coherence time, which restricts the size and complexity of solvable problems, so reducing the number of measurements required for a given accuracy is crucial for making quantum optimisation viable on near-term devices. The reduction of up to 30% in shots represents a significant improvement in resource efficiency, potentially enabling the solution of larger and more complex problems. Future work will focus on extending this framework to other combinatorial optimisation problems and investigating its scalability to larger instances, with the ultimate goal of quantum optimisation algorithms that outperform classical algorithms on real-world applications in logistics, finance, and materials science.
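One way to blunt the instability, sketched below purely as an assumption and not as the paper’s method, is to smooth the objective over the top-k most frequent bitstrings with softmax weights on their frequencies: when a single bitstring dominates, this reduces to the plain mode cut, and when several are nearly tied, the objective varies continuously instead of jumping as the mode flips.

```python
import math
from collections import Counter

def smoothed_mode_cut(samples, graph, k=3, temperature=0.1):
    """Softmax-weighted blend of the cut values of the top-k bitstrings.

    A hypothetical smoothing of the mode-based objective; reuses the
    cut_value helper from the first sketch. Lower temperature makes the
    blend sharper and closer to the plain mode-based objective.
    """
    top = Counter(samples).most_common(k)
    freqs = [n / len(samples) for _, n in top]
    weights = [math.exp(f / temperature) for f in freqs]
    total = sum(weights)
    return sum((w / total) * cut_value(graph, b)
               for (b, _), w in zip(top, weights))
```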
The researchers demonstrated a more resource-efficient approach to the quantum approximate optimisation algorithm, achieving solution quality comparable to standard methods on 3-regular MaxCut problems. By focusing on the cut value of the most probable solution rather than on average performance, the scheme required up to 30% fewer quantum ‘shots’ to reach a given accuracy. This matters because current quantum computers have limited resources, making efficient use of measurements crucial. The authors intend to extend this framework to other optimisation problems and explore its scalability to larger instances.

👉 More information
🗞 Resource-efficient quantum approximate optimization algorithm via Bayesian optimization and maximum-probability evaluation
🧠 ArXiv: https://arxiv.org/abs/2603.28413
