Accelerating Feedback-based Algorithms for Quantum Optimization Using Gradient Descent

Quantum Physics

arXiv:2602.12387 (quant-ph)

[Submitted on 12 Feb 2026]

Title: Accelerating Feedback-based Algorithms for Quantum Optimization Using Gradient Descent
Authors: Masih Mozakka, Mohsen Heidari

Abstract: Feedback-based methods have gained significant attention as an alternative training paradigm for the Quantum Approximate Optimization Algorithm (QAOA) in solving combinatorial optimization problems such as MAX-CUT. In particular, Quantum Lyapunov Control (QLC) employs feedback-driven control laws that guarantee monotonically non-decreasing objective values, can substantially reduce the training overhead of QAOA, and can mitigate barren plateaus. However, these methods may require long control sequences, leading to sub-optimal convergence rates. In this work, we propose a hybrid method that incorporates per-layer gradient estimation to accelerate the convergence of QLC while preserving its low training overhead and stability guarantees. By leveraging layer-wise gradient information, the proposed approach selects near-optimal control parameters, resulting in significantly faster convergence and improved robustness. We validate the effectiveness of the method through extensive numerical experiments across a range of problem instances and optimization settings.

Subjects: Quantum Physics (quant-ph); Machine Learning (cs.LG)
Cite as: arXiv:2602.12387 [quant-ph] (or arXiv:2602.12387v1 [quant-ph] for this version)
DOI: https://doi.org/10.48550/arXiv.2602.12387

Submission history
From: Masih Mozakka
[v1] Thu, 12 Feb 2026 20:30:53 UTC (1,365 KB)
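The abstract itself contains no code, but the core idea it describes, a Lyapunov-control feedback law whose per-layer parameter is then refined with gradient information, can be illustrated concretely. Below is a minimal NumPy sketch on a toy 2-qubit MAX-CUT instance (a single edge). The feedback law beta_k = -<psi| i[H_m, H_p] |psi> is the standard QLC/FALQON-style update; the finite-difference refinement loop, the step sizes, and all function names are illustrative assumptions, not the authors' algorithm.

```python
# Sketch: QLC-style feedback layer with a hypothetical per-layer
# gradient refinement of the control parameter beta.
# Toy problem: MAX-CUT on one edge, i.e. minimize <Z (x) Z>.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

Hp = np.kron(Z, Z)                      # problem Hamiltonian (one edge)
Hm = np.kron(X, I2) + np.kron(I2, X)    # transverse-field mixer

def expm_herm(H, t):
    """exp(-i * t * H) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * t * w)) @ V.conj().T

def layer(psi, beta, dt):
    """One QAOA/QLC-style layer: problem step, then mixer step."""
    return expm_herm(Hm, beta * dt) @ (expm_herm(Hp, dt) @ psi)

def cost(psi):
    """Objective <psi| Hp |psi> (real for Hermitian Hp)."""
    return float(np.real(psi.conj() @ (Hp @ psi)))

dt, n_layers, eta, eps = 0.08, 40, 0.5, 1e-4
psi = np.full(4, 0.5, dtype=complex)    # |+>|+> initial state

for k in range(n_layers):
    # Lyapunov feedback law: beta_k = -<psi| i[Hm, Hp] |psi>,
    # which guarantees the objective is non-increasing for small dt.
    A = np.real(psi.conj() @ (1j * (Hm @ Hp - Hp @ Hm)) @ psi)
    beta = -A
    # Hypothetical per-layer refinement: a few finite-difference
    # gradient-descent steps on beta against the post-layer cost.
    for _ in range(3):
        g = (cost(layer(psi, beta + eps, dt))
             - cost(layer(psi, beta - eps, dt))) / (2 * eps)
        beta -= eta * g
    psi = layer(psi, beta, dt)

print(f"final <Hp> = {cost(psi):+.4f} (ground-state value: -1)")
```

One detail this sketch makes visible: at the |+>|+> initial state the commutator expectation vanishes, so the pure feedback law stalls at beta = 0, whereas the gradient refinement kicks beta off zero immediately. That matches the abstract's motivation that plain QLC can need long control sequences and that layer-wise gradient information can accelerate it.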
