
LLM-Powered Attacks Advance Android Malware Evasion, Achieving 97% Detection Bypass

Quantum Zeitgeist
⚡ Quantum Brief
French researchers at Université Paris Cité developed LAMLAD, a dual-agent LLM framework that evades Android malware detection with 97% success by subtly altering malicious code while preserving functionality. The system uses two LLMs—a manipulator to modify malware features and an analyzer to optimize evasion—enhanced by retrieval-augmented generation for contextual precision. Experiments show LAMLAD bypasses real-world detectors with minimal attempts, exposing critical vulnerabilities in machine learning-based security systems. The team also proposed an adversarial training defense, significantly improving classifier resilience by incorporating LAMLAD-generated samples into model training. This work highlights the escalating arms race between AI-driven attacks and defenses, urging updates to mobile security frameworks against evolving LLM-powered threats.

The increasing sophistication of Android malware presents a constant challenge to detection systems, prompting researchers to explore the vulnerabilities of machine learning-based defenses. Tianwei Lan and Farid Naït-Abdesselam, from Université Paris Cité in France, along with their colleagues, now demonstrate a powerful new method for circumventing these systems, leveraging the capabilities of large language models. Their work introduces LAMLAD, a novel framework that generates subtle, yet effective, alterations to malware characteristics, successfully evading detection while maintaining malicious functionality.

This research is significant because LAMLAD achieves remarkably high success rates against real-world malware detectors, and, importantly, the team also proposes a defense strategy that substantially improves system resilience against this new class of threat, offering a crucial step towards more robust mobile security.


LLMs Evade Android Malware Detection Successfully

LAMLAD leverages large language models (LLMs) to bypass machine learning-based Android malware detectors, addressing the vulnerability of current detection systems to adversarial attacks, in which malware is subtly altered to evade identification. At its core is a dual-agent architecture: an LLM manipulator generates realistic, functionality-preserving changes to malware features, while an LLM analyzer assesses those modifications and guides the process towards successful evasion, ensuring the altered malware remains operational. To enhance efficiency and contextual understanding, the team integrated retrieval-augmented generation (RAG) into the LLM pipeline, allowing the system to draw on relevant information during the attack. Experiments focused on commonly used malware analysis techniques, enabling stealthy attacks against widely deployed detection systems.
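The manipulator-analyzer loop can be sketched roughly as follows. This is a hypothetical illustration, not the authors' code: the `detector`, `manipulator`, and `analyzer` functions and all feature names are invented stand-ins for the LLM agents and the ML detector the paper describes.

```python
# Hypothetical sketch of LAMLAD's dual-agent loop (all names invented).
# The "analyzer" picks the feature most likely to trigger detection, and the
# "manipulator" rewrites it in a functionality-preserving way; the loop
# repeats until a surrogate detector is bypassed or attempts run out.

from dataclasses import dataclass

@dataclass
class Sample:
    features: dict  # e.g. {"PERM_SEND_SMS": 1, "PERM_INTERNET": 1, ...}

def detector(sample):
    """Stand-in ML detector: flags samples exposing two or more risky permissions."""
    risky = sum(v for k, v in sample.features.items() if k.startswith("PERM_"))
    return risky >= 2  # True = classified as malware

def analyzer(sample):
    """Stub for the LLM analyzer: suggests which feature to obfuscate next."""
    risky = [k for k, v in sample.features.items() if k.startswith("PERM_") and v]
    return risky[0] if risky else None

def manipulator(sample, hint):
    """Stub for the LLM manipulator: hides the flagged feature behind an
    obfuscated variant (standing in for, e.g., reflection-based rewriting),
    so the malicious capability is preserved."""
    new = dict(sample.features)
    if hint and new.get(hint, 0):
        new[hint] = 0
        new["REFLECT_" + hint] = 1
    return Sample(new)

def lamlad_attack(sample, max_attempts=5):
    """Iterate the two agents until the detector is evaded."""
    for attempt in range(1, max_attempts + 1):
        if not detector(sample):
            return sample, attempt
        sample = manipulator(sample, analyzer(sample))
    return sample, max_attempts

evaded, n = lamlad_attack(Sample({"PERM_SEND_SMS": 1, "PERM_READ_CONTACTS": 1}))
print(detector(evaded), n)  # → False 2
```

In the real system both agents are LLM calls (with RAG supplying context about the target detector's feature space); the toy stubs above only illustrate the control flow of the attack loop.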

Results demonstrate that LAMLAD achieves a high attack success rate while requiring only minimal attempts to bypass security measures, highlighting its practical effectiveness. Recognizing the potential impact of this capability, the team also proposed and evaluated an adversarial training-based defense: retraining classifiers on LAMLAD-generated samples substantially reduces the attack success rate and improves the robustness of malware classifiers.
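The defense idea, folding attack-generated samples back into training, can be sketched with a toy classifier. This is an invented illustration of adversarial training in general, not the paper's setup: the perceptron, the feature names, and the tiny dataset are all hypothetical.

```python
# Hypothetical sketch of the adversarial-training defense (invented names and
# data). An evasive sample produced by the attack is added to the training set
# with a "malware" label, and the classifier is retrained, closing the blind
# spot the attack exploited.

def train_perceptron(data, epochs=20, lr=1.0):
    """Tiny perceptron over binary feature dicts; returns (weights, bias)."""
    feats = sorted({f for x, _ in data for f in x})
    w = {f: 0.0 for f in feats}
    b = 0.0
    for _ in range(epochs):
        for x, y in data:  # y: 1 = malware, 0 = benign
            score = b + sum(w[f] * x.get(f, 0) for f in feats)
            pred = 1 if score > 0 else 0
            if pred != y:
                for f in feats:
                    w[f] += lr * (y - pred) * x.get(f, 0)
                b += lr * (y - pred)
    return w, b

def predict(model, x):
    w, b = model
    return 1 if b + sum(w.get(f, 0.0) * v for f, v in x.items()) > 0 else 0

# Original training data: the model learns to key on PERM_SEND_SMS.
clean = [({"PERM_SEND_SMS": 1}, 1), ({"PERM_INTERNET": 1}, 0)]
model = train_perceptron(clean)

# Adversarial sample: the same capability hidden behind an obfuscated feature.
adv = {"REFLECT_SEND_SMS": 1}
print(predict(model, adv))  # → 0: the original model is evaded

# Defense: retrain with the attack-generated sample labelled as malware.
hardened = train_perceptron(clean + [(adv, 1)])
print(predict(hardened, adv))  # → 1: detected after adversarial training
```

The same augment-and-retrain pattern applies to the real detectors in the paper, with LAMLAD acting as the generator of hard training examples.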

In summary, the team developed a dual-agent system in which one language model creates realistic alterations to malware features and another guides the process towards successful evasion, all while preserving the malware's core functionality; integrating retrieval-augmented generation further improves the system's efficiency and contextual awareness. Evaluations against established malware detectors confirm a high adversarial attack success rate, and, importantly, training models with examples generated by LAMLAD significantly improves robustness against this type of attack.

👉 More information
🗞 LLM-Driven Feature-Level Adversarial Attacks on Android Malware Detectors
🧠 ArXiv: https://arxiv.org/abs/2512.21404

