Learning to See through Sound: From VggCaps to Multi2Cap for Richer Automated Audio Captioning

Sangyeon Cho, Mingi Kim, Jinkwon Hwang, Jaehoon Go, Minuk Ma, Sunjae Yoon, Junyeong Kim


Abstract
Automated Audio Captioning (AAC) aims to generate natural language descriptions of audio content, enabling machines to interpret and communicate complex acoustic scenes. However, current AAC datasets often suffer from short and simplistic captions, limiting model expressiveness and semantic depth. To address this, we introduce **VggCaps**, a new multi-modal dataset that pairs audio with corresponding video and leverages large language models (LLMs) to generate rich, descriptive captions. VggCaps significantly outperforms existing benchmarks in caption length, lexical diversity, and human-rated quality. Furthermore, we propose **Multi2Cap**, a novel AAC framework that learns audio-visual representations through an AV-grounding module during pre-training and reconstructs visual semantics using audio alone at inference. This enables visually grounded captioning in audio-only scenarios. Experimental results on Clotho and AudioCaps demonstrate that Multi2Cap achieves state-of-the-art performance across multiple metrics, validating the effectiveness of cross-modal supervision and LLM-based generation in advancing AAC.
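The abstract describes the core idea only at a high level: ground audio in visual semantics during pre-training, then recover those visual semantics from audio alone at inference. The PyTorch sketch below is not the authors' implementation; it is a minimal illustration of one plausible reading of that idea, assuming hypothetical module names (AVGroundingSketch, pretrain_loss, infer_visual_from_audio), pre-extracted audio/video features, a contrastive alignment objective, and an MSE-based audio-to-visual reconstruction head.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AVGroundingSketch(nn.Module):
    """Hypothetical sketch of AV-grounding: align audio and video embeddings in a
    shared space during pre-training, and train an audio-to-visual reconstruction
    head so visual semantics can be approximated from audio alone at inference."""

    def __init__(self, audio_dim=768, video_dim=1024, shared_dim=512):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, shared_dim)   # audio encoder output -> shared space
        self.video_proj = nn.Linear(video_dim, shared_dim)   # video encoder output -> shared space
        self.reconstructor = nn.Sequential(                  # audio embedding -> pseudo-visual embedding
            nn.Linear(shared_dim, shared_dim),
            nn.ReLU(),
            nn.Linear(shared_dim, shared_dim),
        )

    def pretrain_loss(self, audio_feats, video_feats, temperature=0.07):
        # Project both modalities into the shared space and L2-normalize.
        a = F.normalize(self.audio_proj(audio_feats), dim=-1)
        v = F.normalize(self.video_proj(video_feats), dim=-1)
        # Symmetric InfoNCE-style contrastive loss over the batch (paired items are positives).
        logits = a @ v.t() / temperature
        targets = torch.arange(a.size(0), device=a.device)
        contrastive = 0.5 * (F.cross_entropy(logits, targets)
                             + F.cross_entropy(logits.t(), targets))
        # Reconstruction loss: predict the visual embedding from audio only.
        recon = F.mse_loss(self.reconstructor(a), v.detach())
        return contrastive + recon

    @torch.no_grad()
    def infer_visual_from_audio(self, audio_feats):
        # At inference no video is available; the reconstructor supplies a
        # visually grounded embedding that a caption decoder could condition on.
        a = F.normalize(self.audio_proj(audio_feats), dim=-1)
        return self.reconstructor(a)


# Toy usage with random tensors standing in for encoder outputs.
model = AVGroundingSketch()
audio = torch.randn(8, 768)
video = torch.randn(8, 1024)
loss = model.pretrain_loss(audio, video)           # pre-training: audio + video available
pseudo_visual = model.infer_visual_from_audio(audio)  # inference: audio only
print(loss.item(), pseudo_visual.shape)
```

In this reading, the caption decoder would consume both the audio embedding and the reconstructed pseudo-visual embedding, so audio-only inputs still receive visually grounded conditioning; the actual Multi2Cap architecture and losses are specified in the paper itself.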
Anthology ID:
2025.emnlp-main.715
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
14168–14186
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.715/
Cite (ACL):
Sangyeon Cho, Mingi Kim, Jinkwon Hwang, Jaehoon Go, Minuk Ma, Sunjae Yoon, and Junyeong Kim. 2025. Learning to See through Sound: From VggCaps to Multi2Cap for Richer Automated Audio Captioning. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 14168–14186, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Learning to See through Sound: From VggCaps to Multi2Cap for Richer Automated Audio Captioning (Cho et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.715.pdf
Checklist:
 2025.emnlp-main.715.checklist.pdf