Jiaming He


2025

Evaluating Robustness of Large Audio Language Models to Audio Injection: An Empirical Study
Guanyu Hou | Jiaming He | Yinhang Zhou | Ji Guo | Yitong Qiao | Rui Zhang | Wenbo Jiang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Large Audio-Language Models (LALMs) are increasingly deployed in real-world applications, yet their robustness against malicious audio injection remains underexplored. To address this gap, this study systematically evaluates five leading LALMs across four attack scenarios: Audio Interference Attack, Instruction Following Attack, Context Injection Attack, and Judgment Hijacking Attack. We quantitatively assess their vulnerabilities and resilience using three metrics: the Defense Success Rate, the Context Robustness Score, and the Judgment Robustness Index. The experiments reveal significant performance disparities, with no single model demonstrating consistent robustness across all attack types. Attack effectiveness is strongly influenced by the position of the malicious content, particularly when it is injected at the beginning of a sequence. Furthermore, our analysis uncovers a negative correlation between a model’s instruction-following capability and its robustness: models that strictly adhere to instructions tend to be more susceptible, whereas safety-aligned models exhibit greater resistance. To facilitate future research, this work introduces a comprehensive benchmark framework. Our findings underscore the critical need for integrating robustness into training pipelines and developing multi-modal defenses, ultimately supporting the secure deployment of LALMs. The dataset used in this work is available on Hugging Face.