quantum-computing

Attention in Krylov Space

arXiv Quantum Physics
3 min read
⚡ Quantum Brief
Researchers Zihao Qi and Christopher Earls propose a transformer-based model that predicts Lanczos coefficients—key to operator growth in quantum systems—by treating them as a causal time sequence, addressing longstanding numerical-instability and memory limitations. The model autoregressively forecasts future coefficients from short prefixes and outperforms traditional asymptotic fits by an order of magnitude in both coefficient extrapolation and physical-observable reconstruction, for classical and quantum systems alike. The model also transfers across system sizes: trained only on smaller systems, it extrapolates coefficients for larger ones without retraining, reducing computational overhead. Analysis of the learned attention reveals history-dependent structure in the coefficients that asymptotic methods miss, and targeted attention ablations pinpoint which portions of the coefficient history matter most for accurate forecasts. The work bridges machine learning and quantum dynamics, offering a data-driven alternative to analytical approximations while preserving physical interpretability through attention-pattern probing.
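For context, the Lanczos coefficients b_n come from running the Lanczos recursion with the Liouvillian L = [H, ·] acting on an operator. A minimal dense-matrix sketch (not the paper's code; the normalized Hilbert–Schmidt inner product and all names here are illustrative choices):

```python
import numpy as np

def lanczos_coefficients(H, O, n):
    """Lanczos coefficients b_n for operator growth: a minimal sketch using
    dense matrices and the normalized Hilbert-Schmidt inner product."""
    dim = H.shape[0]
    ip = lambda A, B: np.trace(A.conj().T @ B) / dim   # <A|B> = Tr(A^dag B)/dim
    liou = lambda A: H @ A - A @ H                     # Liouvillian: L(A) = [H, A]
    On = O / np.sqrt(ip(O, O).real)                    # O_0, normalized
    Om1 = np.zeros_like(On)                            # O_{-1}
    b_prev = 0.0
    bs = []
    for _ in range(n):
        A = liou(On) - b_prev * Om1                    # orthogonalize against O_{n-1}
        b = np.sqrt(ip(A, A).real)
        bs.append(b)
        if b < 1e-12:                                  # Krylov space exhausted
            break
        Om1, On, b_prev = On, A / b, b
    return bs
```

For example, with H = σ_z and O = σ_x the recursion terminates after b_1 = 2, because the Krylov space of σ_x under [σ_z, ·] is two-dimensional. In practice, the instability and memory cost of exactly this kind of recursion at large n is what motivates the paper's forecasting approach.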

Quantum Physics — arXiv:2601.07937 (quant-ph)
[Submitted on 12 Jan 2026]

Title: Attention in Krylov Space
Authors: Zihao Qi, Christopher Earls

Abstract: The Universal Operator Growth Hypothesis formulates the time evolution of operators through Lanczos coefficients. In practice, however, numerical instability and memory cost limit the number of coefficients that can be computed exactly. In response to these challenges, the standard approach relies on fitting early coefficients to asymptotic forms, but such procedures can miss subleading, history-dependent structures in the coefficients that subsequently affect reconstructed observables. In this work, we treat the Lanczos coefficients as a causal time sequence and introduce a transformer-based model to autoregressively predict future Lanczos coefficients from short prefixes. For both classical and quantum systems, our machine-learning model outperforms asymptotic fits in both coefficient extrapolation and physical observable reconstruction, achieving an order-of-magnitude reduction in error. Our model also transfers across system sizes: it can be trained on smaller systems and then used to extrapolate coefficients on a larger system without retraining. By probing the learned attention patterns and performing targeted attention ablations, we identify which portions of the coefficient history are most influential for accurate forecasts.
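The abstract contrasts two extrapolation protocols: autoregressive one-step-ahead forecasting (the paper uses a causal transformer as the predictor) versus fitting early coefficients to an asymptotic form. A model-agnostic sketch of both, with a hypothetical toy predictor standing in for the transformer and a linear ansatz b_n ≈ αn + γ standing in for the asymptotic fit (the paper's actual architecture and fit form may differ):

```python
import numpy as np

def autoregressive_extrapolate(prefix, predict_next, n_future):
    """Feed a short prefix of Lanczos coefficients, predict one step,
    append, repeat. predict_next sees only past values (causal)."""
    seq = list(prefix)
    for _ in range(n_future):
        seq.append(predict_next(seq))
    return seq[len(prefix):]

def asymptotic_fit_extrapolate(prefix, n_future):
    """Baseline: fit early coefficients to a linear asymptotic ansatz
    b_n ~ alpha*n + gamma, then read future values off the fit."""
    n = np.arange(1, len(prefix) + 1)
    alpha, gamma = np.polyfit(n, prefix, 1)
    future_n = np.arange(len(prefix) + 1, len(prefix) + n_future + 1)
    return alpha * future_n + gamma
```

On an exactly linear toy sequence b_n = 0.5n both protocols agree; the paper's point is that on real coefficient sequences, with subleading history-dependent structure, the learned causal predictor is the one that captures what the asymptotic fit misses.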
Subjects: Quantum Physics (quant-ph); Statistical Mechanics (cond-mat.stat-mech)
Cite as: arXiv:2601.07937 [quant-ph] (or arXiv:2601.07937v1 [quant-ph] for this version)
DOI: https://doi.org/10.48550/arXiv.2601.07937 (arXiv-issued DOI via DataCite, pending registration)
Submission history: [v1] Mon, 12 Jan 2026 19:07:22 UTC (429 KB), submitted by Zihao Qi


Tags

government-funding
quantum-algorithms
quantum-investment

Source Information

Source: arXiv Quantum Physics