Beijing University of Posts and Telecoms Finds QML Models Leak Data

Researchers at Beijing University of Posts and Telecommunications have demonstrated that quantum machine learning (QML) models, contrary to previous assumptions, leak information about the data used to train them.
The team confirmed this “membership-privacy leakage” in both a basic quantum neural network (QNN) and a hybrid QNN through simulations and, crucially, through tests on actual cloud quantum devices. This mirrors a known vulnerability in classical machine learning and raises new questions about data security in the emerging field of quantum computation. To demonstrate the risk, the researchers developed a specialized membership inference attack (MIA) tailored to QNN outputs, showing how to identify whether a specific data point influenced a model’s training; the work also charts a potential path toward privacy-preserving QML.
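To make the attack concrete, here is a minimal sketch of a generic confidence-threshold membership inference attack. This is not the authors’ QNN-tailored attack; the threshold, synthetic scores, and function names are illustrative assumptions. The core idea is the standard one: models tend to be more confident on inputs they were trained on.

```python
# Minimal sketch of a generic confidence-threshold membership inference
# attack. NOT the paper's QNN-specific attack; the threshold and the
# synthetic confidence scores below are illustrative assumptions.
import numpy as np

def membership_inference(model_probs: np.ndarray, threshold: float) -> np.ndarray:
    """Predict membership from the target model's output probabilities.

    model_probs: (n_samples, n_classes) array of class probabilities.
    Returns True where a point is predicted to be a training member.
    Intuition: models are usually more confident on their training data.
    """
    return model_probs.max(axis=1) > threshold

# Toy demonstration: members tend to receive higher-confidence outputs.
rng = np.random.default_rng(0)
member_conf = rng.beta(8, 2, size=100)     # confidences skewed toward 1.0
nonmember_conf = rng.beta(4, 4, size=100)  # confidences centered near 0.5
conf = np.concatenate([member_conf, nonmember_conf])
probs = np.column_stack([conf, 1.0 - conf])  # fake two-class outputs
labels = np.array([True] * 100 + [False] * 100)
preds = membership_inference(probs, threshold=0.8)
print("attack accuracy:", (preds == labels).mean())
```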
Membership Inference Attacks Reveal QNN Training Data Leakage

Quantum neural networks, once theorized to offer inherent privacy advantages, are demonstrably vulnerable to revealing information about their training data, according to new research from Beijing University of Posts and Telecommunications (BUPT). Researchers led by Fei Gao have shown that these models leak “membership privacy,” meaning an attacker can determine whether a specific data point was used during training. This mirrors a well-known vulnerability in classical machine learning but represents a first-of-its-kind demonstration in the quantum realm.
The team’s work, published in Physical Review Applied, moves beyond theoretical risk assessment by confirming the leakage through both simulations and experiments on actual cloud quantum devices. The study addresses two questions: whether QML models leak membership privacy about their training data, and whether methods exist to mitigate that leakage. The researchers then explored “quantum machine unlearning (QMU),” a framework comprising three mechanisms designed to remove the influence of withdrawn data from a trained model while preserving accuracy on the data that remains. Evaluations on both QNN architectures showed that QMU effectively removes the influence of the withdrawn data. The researchers also examined how the number of “shots” used in quantum measurement affects both the extent of membership leakage and the stability of the unlearning process, providing a potential path toward “privacy-preserving QML,” as stated in the published research.
Quantum Machine Unlearning Framework with Three MU Mechanisms

This leakage, where an attacker can determine whether specific data influenced model training, prompted the team to investigate whether QML inherently offers more privacy than its classical counterparts; their findings suggest it does not. Evaluations across both QNN architectures revealed that QMU successfully eliminates traces of the removed data while maintaining accuracy on the data the model retained, a critical balance for practical applications. A comparative analysis characterized the three mechanisms, assessing their performance in terms of data dependence, computational demands, and overall robustness. One plausible instance of such a mechanism is sketched below.
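The article does not spell out the three mechanisms, so the following sketch shows only one plausible unlearning recipe, borrowed from classical machine unlearning: gradient ascent on the withdrawn sample followed by repair steps on the retained data, applied to a toy one-qubit variational circuit in PennyLane. The circuit, data, learning rates, and step counts are all illustrative assumptions, not the authors’ implementation.

```python
# Sketch of ONE plausible unlearning mechanism (ascent-then-repair) on a
# toy variational circuit. Assumed for illustration; not the paper's code.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def qnn(weights, x):
    qml.RY(x, wires=0)                # angle-encode the input
    qml.RY(weights[0], wires=0)       # single trainable rotation
    return qml.expval(qml.PauliZ(0))  # output in [-1, 1]

def loss(weights, x, y):
    return (qnn(weights, x) - y) ** 2  # squared error on one sample

grad = qml.grad(loss, argnum=0)
weights = np.array([0.1], requires_grad=True)

# Toy training set; the last point will be withdrawn after training.
data = [(0.2, 1.0), (1.1, -1.0), (2.4, -1.0)]
for _ in range(100):                  # ordinary gradient-descent training
    for x, y in data:
        weights = weights - 0.1 * grad(weights, x, y)

x_f, y_f = data[-1]                   # the withdrawn ("forget") sample
for _ in range(20):                   # unlearning: gradient *ascent* on it
    weights = weights + 0.1 * grad(weights, x_f, y_f)
for _ in range(50):                   # repair: restore accuracy on retained data
    for x, y in data[:-1]:
        weights = weights - 0.1 * grad(weights, x, y)

print("forget-sample loss after unlearning:", loss(weights, x_f, y_f))
```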
Shot Count Impacts Leakage & Stability in Quantum Measurements

Their work, detailed in Physical Review Applied, reveals that increasing the shot count doesn’t necessarily improve security; instead, it can paradoxically enhance an attacker’s ability to infer training-data membership. This finding challenges the assumption that collecting more measurement statistics is privacy-neutral, an intuition carried over from classical machine learning.
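The role of shots can be seen with ordinary binomial statistics, no quantum hardware required. The sketch below uses plain NumPy, with probabilities that are illustrative assumptions rather than the paper’s measurements: a qubit’s ⟨Z⟩ output is estimated from a finite number of shots, so its standard error shrinks as 1/√shots, and a higher shot count separates member from non-member output distributions more cleanly, which is exactly the signal a membership attack feeds on.

```python
# Why shot count matters: a QNN output such as <Z> is estimated from a
# finite number of measurement shots, so its standard error shrinks as
# 1/sqrt(shots). Low shot counts blur the gap between member and
# non-member outputs; high shot counts expose it. The probabilities
# below are illustrative assumptions, not the paper's data.
import numpy as np

rng = np.random.default_rng(1)

def estimate_expval(p_one: float, shots: int) -> float:
    """Estimate <Z> for one qubit from computational-basis shots.

    p_one is the true probability of measuring |1>, so <Z> = 1 - 2*p_one.
    """
    ones = rng.binomial(shots, p_one)
    return 1.0 - 2.0 * ones / shots

p_member, p_nonmember = 0.05, 0.20  # members yield more confident outputs
for shots in (16, 256, 4096):
    m = [estimate_expval(p_member, shots) for _ in range(1000)]
    n = [estimate_expval(p_nonmember, shots) for _ in range(1000)]
    gap = np.mean(m) - np.mean(n)   # set by the true probabilities
    noise = np.std(m) + np.std(n)   # shrinks as shots grow
    print(f"shots={shots:5d}  gap={gap:+.3f}  noise={noise:.3f}")
```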
The team’s analysis focused on how quantum constraints specifically shape this leakage, leading them to formalize a “realistic gray-box threat model” for assessing the risks. Further complicating matters, the study revealed that shot count also significantly impacts the stability of QMU itself. Specifically, the researchers found that the effectiveness of the QMU mechanisms designed to mitigate membership-privacy leakage is sensitive to the number of shots used during quantum measurement, with certain configurations proving more robust than others.

Source: http://link.aps.org/doi/10.1103/mdfc-pgsl
