quantum-computing

Machine-learned, finite temperature Fermi-operator expansions suitable for GPUs and AI-hardware

arXiv Quantum Physics
3 min read
⚡ Quantum Brief
Researchers have developed machine-learned, finite-temperature Fermi-operator expansions based on the second-order spectral projection (SP2) method, enabling GPU-compatible electronic structure calculations without explicit diagonalization. The method maps the recursive SP2 expansion onto a deep neural network architecture, so that the expansion coefficients can be optimized by machine learning for a specified chemical potential and electronic temperature. An affine rescaling of the Hamiltonian matrix removes the need to retrain the model when the temperature or chemical potential changes during a simulation. Benchmarks show roughly an order-of-magnitude (about 10x) speedup over state-of-the-art diagonalization for single-particle finite-temperature density matrix calculations on small and moderately sized matrices. The scheme relies solely on highly optimized matrix-matrix multiplication kernels, a workload well matched to modern GPUs and dense matrix multiply units.
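
For orientation, here is a minimal sketch (not the authors' code) of the zero-temperature SP2 recursion that the paper generalizes. An affine map places the Hamiltonian's spectrum inside [0, 1], and repeated matrix-matrix products then purify it into a density matrix with the target occupation, with no diagonalization. In the finite-temperature schemes described above, the fixed branch choice below is replaced by machine-learned expansion coefficients; the matrix size, iteration count, and occupation here are illustrative.

```python
# Minimal sketch (not the authors' code): zero-temperature SP2 purification,
# the recursion the paper generalizes to finite temperature with
# machine-learned expansion coefficients. Only matrix-matrix products are used.
import numpy as np

def sp2_density_matrix(H, n_occ, n_iter=40):
    """Idempotent single-particle density matrix from H via recursive spectral projection."""
    n = H.shape[0]
    # Affine map of the spectrum into [0, 1]; exact bounds are used here for simplicity,
    # cheap Gershgorin-style estimates suffice in practice.
    w = np.linalg.eigvalsh(H)
    eps_min, eps_max = w[0], w[-1]
    X = (eps_max * np.eye(n) - H) / (eps_max - eps_min)   # occupied states map toward 1
    for _ in range(n_iter):
        X2 = X @ X                                        # the only expensive kernel: a GEMM
        # Pick the branch that steers the trace toward the target occupation.
        if abs(np.trace(X2) - n_occ) <= abs(np.trace(2.0 * X - X2) - n_occ):
            X = X2
        else:
            X = 2.0 * X - X2
    return X

# Toy check against diagonalization on a random symmetric "Hamiltonian".
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64))
H = 0.5 * (A + A.T)
n_occ = 32
D = sp2_density_matrix(H, n_occ)
w, V = np.linalg.eigh(H)
D_ref = V[:, :n_occ] @ V[:, :n_occ].T   # zero-temperature reference projector
print(np.max(np.abs(D - D_ref)))        # should be small for a generic, gapped spectrum
```

Every iteration is dominated by a single dense matrix-matrix multiplication, which is exactly the workload that GPUs and dense matrix multiply units are built to accelerate.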

Quantum Physics > arXiv:2605.08523 (quant-ph)
[Submitted on 8 May 2026]

Title: Machine-learned, finite temperature Fermi-operator expansions suitable for GPUs and AI-hardware
Authors: Stanislaw Kowalski, Christian F. A. Negre, Anders M. N. Niklasson, Kipton Barros, Joshua Finkelstein

Abstract: We present several finite-temperature recursive Fermi-operator expansion schemes based on the second-order spectral projection (SP2) method. Our approach builds on a previous observation that the electronic structure problem, as formulated through a recursive SP2 expansion, can be mapped onto the architecture of a deep neural network. Using this perspective, we generalize SP2 to finite electronic temperatures and construct machine learning models to determine optimized expansion coefficients. These coefficients are trained for a specified chemical potential and electronic temperature and are not available in closed analytical form. However, by employing an appropriate affine rescaling strategy to the Hamiltonian matrix, we eliminate the need to retrain the model during a simulation if the temperature and chemical potential change. Our approach avoids explicit diagonalization and relies solely on highly optimized matrix-matrix multiplication kernels. Compared to state-of-the-art diagonalization, we achieve an order-of-magnitude speedup in the single-particle finite-temperature density matrix calculation for small and moderately sized matrices on modern GPUs and dense matrix multiply units.

Subjects: Quantum Physics (quant-ph)
MSC classes: 81-08
ACM classes: J.2
Cite as: arXiv:2605.08523 [quant-ph] (or arXiv:2605.08523v1 [quant-ph] for this version)
DOI: https://doi.org/10.48550/arXiv.2605.08523 (arXiv-issued DOI via DataCite, pending registration)
Submission history: [v1] Fri, 8 May 2026 22:15:11 UTC (1,010 KB), submitted by Stanislaw Kowalski
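
The affine rescaling mentioned in the abstract can be illustrated with a simple identity (a sketch consistent with the abstract, not necessarily the paper's exact scheme): the Fermi operator depends on the Hamiltonian only through beta*(H - mu*I), so coefficients trained for reference parameters (beta0, mu0) can be reused at new values (beta, mu) by feeding in an affinely transformed Hamiltonian. The routine below uses diagonalization purely to verify the identity; all parameter values are illustrative.

```python
# Sketch of one affine-rescaling identity consistent with the abstract
# (not necessarily the paper's exact scheme): the Fermi operator depends on H
# only through beta*(H - mu*I), so
#   f_{beta, mu}(H) = f_{beta0, mu0}(H')  with  H' = (beta/beta0)*(H - mu*I) + mu0*I.
import numpy as np

def fermi_density_matrix(H, beta, mu):
    """Reference finite-temperature density matrix via diagonalization."""
    w, V = np.linalg.eigh(H)
    # Fermi-Dirac occupations; the argument is clipped only to avoid overflow warnings.
    occ = 1.0 / (1.0 + np.exp(np.clip(beta * (w - mu), -700.0, 700.0)))
    return (V * occ) @ V.T

rng = np.random.default_rng(1)
A = rng.standard_normal((32, 32))
H = 0.5 * (A + A.T)
I = np.eye(32)

beta0, mu0 = 10.0, 0.0    # parameters an expansion would be trained for (illustrative)
beta,  mu  = 25.0, 0.3    # parameters encountered later in the simulation (illustrative)

H_rescaled = (beta / beta0) * (H - mu * I) + mu0 * I
D_direct   = fermi_density_matrix(H, beta, mu)
D_rescaled = fermi_density_matrix(H_rescaled, beta0, mu0)
print(np.max(np.abs(D_direct - D_rescaled)))   # agrees to round-off
```

Because the two density matrices agree to round-off, an expansion trained once for (beta0, mu0) can follow changing temperature and chemical potential during a simulation without retraining.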


Tags

quantum-chemistry

Source Information

Source: arXiv Quantum Physics