Investigating and Enhancing Vision-Audio Capability in Omnimodal Large Language Models

Rui Hu, Delai Qiu, Shuyu Wei, Jiaming Zhang, Yining Wang, Shengping Liu, Jitao Sang


Abstract
Omnimodal Large Language Models (OLLMs) have made significant progress in integrating vision and text, but still struggle to integrate vision and audio, often performing worse on audio queries than on equivalent text queries. This disparity stems primarily from insufficient alignment between the vision and audio modalities during training, which leads the model to attend inadequately to visual information when queried via audio. To mitigate this issue, we propose a Self-Knowledge Distillation (Self-KD) training method in which the vision-text component of the OLLM serves as the teacher and the vision-audio component as the student, enabling the model to process audio in a manner analogous to its text processing. Our experimental results demonstrate that Self-KD effectively enhances the vision-audio capabilities of OLLMs: by learning from the vision-text component, the model improves the interaction between audio and images and achieves better performance on multimodal tasks.
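The core idea in the abstract is a distillation loss that pulls the vision-audio (student) path's output distribution toward that of the vision-text (teacher) path for the same underlying query. The sketch below shows a standard temperature-scaled KL distillation loss over next-token logits; the function names, the temperature value, and the T² scaling are conventions from generic knowledge distillation, assumed here for illustration rather than taken from the paper's implementation.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits (numerically stable)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def self_kd_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    teacher_logits: next-token logits from the vision-text path (text query).
    student_logits: logits from the vision-audio path (the paired audio query).
    The T**2 factor keeps gradient magnitudes comparable across temperatures,
    as in standard knowledge distillation.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)
    return (temperature ** 2) * kl

# Identical teacher/student logits give zero loss; the loss grows as the
# student's distribution drifts from the teacher's.
```

In training, this loss would be minimized with respect to the student (vision-audio) parameters only, so that audio queries elicit the same predictive distribution the model already produces for text queries over the same image.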
Anthology ID:
2025.findings-acl.389
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues:
Findings | WS
Publisher:
Association for Computational Linguistics
Pages:
7452–7463
URL:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.findings-acl.389/
Cite (ACL):
Rui Hu, Delai Qiu, Shuyu Wei, Jiaming Zhang, Yining Wang, Shengping Liu, and Jitao Sang. 2025. Investigating and Enhancing Vision-Audio Capability in Omnimodal Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 7452–7463, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Investigating and Enhancing Vision-Audio Capability in Omnimodal Large Language Models (Hu et al., Findings 2025)
PDF:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.findings-acl.389.pdf