
LiePrune Achieves Over 10× Compression of Quantum Neural Networks with Negligible Performance Loss for Machine Learning Tasks

Quantum Zeitgeist

Quantum neural networks represent a promising avenue for near-term machine learning, but their potential is currently limited by the sheer number of parameters required and associated computational challenges. Haijian Shao, Bowen Yang, and Wei Liu, from Jiangsu University of Science and Technology, alongside Yingtao Jiang from the University of Nevada, Las Vegas, and colleagues, address this issue with LiePrune, a novel framework for dramatically simplifying these networks. LiePrune uniquely combines Lie group theory and geometric principles to identify and remove redundant parameters in a principled way, achieving significant compression without sacrificing performance.

Near-term quantum machine learning faces scalability limits from excessive parameters, barren plateaus, and hardware constraints. The team demonstrates that their method not only compresses networks aggressively but also offers provable guarantees on redundancy detection, functional approximation, and computational efficiency, a substantial step toward practical, scalable quantum machine learning.
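To make the pruning idea concrete, here is a minimal sketch of one-shot, magnitude-based pruning in the Lie-algebra picture. The scoring rule, function names, and threshold below are illustrative assumptions for this article, not LiePrune's actual criterion, which also incorporates quantum geometric features.

```python
import numpy as np

# Toy setting: each parameterized gate exp(-i * theta/2 * G), for a fixed
# Pauli generator G, is summarized by its Lie-algebra coordinate theta.
# A gate whose wrapped angle is near 0 is close to the identity (up to a
# global phase) and is therefore a pruning candidate.

def redundancy_scores(thetas):
    """Score gates by the magnitude of their wrapped Lie-algebra coefficient."""
    wrapped = np.mod(np.asarray(thetas) + np.pi, 2 * np.pi) - np.pi  # -> [-pi, pi)
    return np.abs(wrapped)

def one_shot_prune(thetas, keep_ratio):
    """One-shot structured pruning: keep the top `keep_ratio` fraction of gates."""
    scores = redundancy_scores(thetas)
    n_keep = max(1, int(round(keep_ratio * len(scores))))
    keep = np.argsort(scores)[-n_keep:]  # indices of the largest scores
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return mask

# Example: prune a 288-parameter circuit down to 36 parameters (8x),
# matching the parameter counts reported for the MNIST 4-vs-9 experiment.
rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi, np.pi, size=288)
mask = one_shot_prune(theta, keep_ratio=36 / 288)
print(f"kept {mask.sum()} of {mask.size} parameters")
```

In the paper's setting, the retained parameters would then be fine-tuned to recover any accuracy lost in the one-shot pruning step.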

This research introduces LiePrune, a mathematically grounded, one-shot structured pruning framework for quantum neural networks and parameterized quantum circuits that exploits Lie group structure and quantum geometric information. The method jointly represents each gate in a Lie group, its Lie algebra dual space, and a quantum geometric feature space, enabling principled redundancy detection and aggressive compression. Experiments on quantum classification tasks using the MNIST and FashionMNIST datasets, and on quantum chemistry simulations of the LiH Variational Quantum Eigensolver (VQE), demonstrate that LiePrune achieves over 10× compression.

LiePrune Compresses Quantum Networks with Minimal Loss

The framework achieves substantial reductions in the number of parameters required for operation. Experiments demonstrate that LiePrune can compress models by a factor of 8 to 10 with minimal loss of accuracy on classification tasks such as MNIST and FashionMNIST. On the MNIST 4-vs-9 dataset, the team reduced parameters from 288 to 36 while maintaining 95.9% of the original accuracy after fine-tuning. Similar results were observed on the Fashion Sandal-vs-Boot dataset, where parameters were compressed from 360 to 36, achieving 74.0% accuracy after fine-tuning.

The research team also investigated LiePrune's performance on a quantum chemistry task, the LiH variational quantum eigensolver (VQE) problem, using a 12-qubit, 12-layer ansatz. LiePrune achieved a 12-fold compression, reducing parameters from 432 to 36, but this aggressive compression caused a significant increase in energy deviation, with the computed energy initially deteriorating from −7.5225 Ha to −3.7416 Ha. Subsequent fine-tuning partially recovered the ground-state energy to −4.2875 Ha, though a gap of 3.23 Ha remained. Further analysis revealed that mild compression levels induced minimal energy deviations, fully recoverable with fine-tuning, while aggressive compression led to substantial errors. These results demonstrate that LiePrune effectively compresses quantum models for classification tasks, but chemically structured Hamiltonians are more sensitive to strong pruning and require specialized strategies to preserve accuracy.

LiePrune Enables Scalable Quantum Circuit Compression

LiePrune represents a significant advance in the development of practical quantum neural networks and parameterized quantum circuits. The researchers have created a mathematically grounded framework for efficiently pruning these circuits, addressing a key limitation on their scalability imposed by excessive parameters and computational demands. The method leverages the underlying Lie group structure of quantum circuits, allowing aggressive compression while preserving functionality. Experiments across diverse tasks, including image classification and quantum chemistry simulation, show that LiePrune achieves substantial parameter reduction, eight to twelve times, with minimal or even improved performance. This stems from a novel approach to redundancy detection that represents each gate in a dual Lie group and Lie algebra space together with a geometric feature space.
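For reference, the compression factors quoted above follow directly from the reported parameter counts; this small Python snippet simply reproduces that arithmetic:

```python
# Compression factors implied by the parameter counts reported above.
experiments = {
    "MNIST 4-vs-9": (288, 36),                # 95.9% of original accuracy retained
    "Fashion Sandal-vs-Boot": (360, 36),      # 74.0% accuracy after fine-tuning
    "LiH VQE, 12-qubit 12-layer": (432, 36),  # ~3.23 Ha energy gap remains
}
for name, (before, after) in experiments.items():
    print(f"{name}: {before} -> {after} parameters ({before / after:.0f}x)")
```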
However, chemically structured Hamiltonians exhibit greater sensitivity to compression than the classification benchmarks tested, suggesting that further refinements, such as chemistry-aware constraints, are needed to fully realize the benefits of LiePrune in this domain.

👉 More information
🗞 LiePrune: Lie Group and Quantum Geometric Dual Representation for One-Shot Structured Pruning of Quantum Neural Networks
🧠 ArXiv: https://arxiv.org/abs/2512.09469

Tags

quantum-algorithms
quantum-communication

Source Information

Source: Quantum Zeitgeist