@inproceedings{yang-etal-2025-reshaping,
    title = "Reshaping Representation Space to Balance the Safety and Over-rejection in Large Audio Language Models",
    author = "Yang, Hao  and
      Qu, Lizhen  and
      Shareghi, Ehsan  and
      Haffari, Gholamreza",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.510/",
    pages = "10078--10090",
    ISBN = "979-8-89176-332-6",
    abstract = "Large Audio Language Models (LALMs) have extended the capabilities of Large Language Models (LLMs) by enabling audio-based human interactions. However, recent research has revealed that LALMs remain vulnerable to harmful queries due to insufficient safety-alignment. Despite advances in defence measures for text and vision LLMs, effective safety-alignment strategies and audio-safety datasets specifically targeting LALMs are notably absent. Meanwhile, defence measures based on Supervised Fine-tuning (SFT) struggle to improve safety while avoiding over-rejection, significantly compromising helpfulness. In this work, we propose an unsupervised safety fine-tuning strategy as a remedy that reshapes the model{'}s representation space to enhance the safety-alignment of existing LALMs while balancing the risk of over-rejection. Our experiments, conducted across three generations of Qwen LALMs, demonstrate that our approach significantly improves LALM safety under three modality input conditions (audio-text, text-only, and audio-only) while increasing the over-rejection rate by only 0.88{\%} on average. Warning: this paper contains harmful examples."
}