Data-Centric Improvements for Enhancing Multi-Modal Understanding in Spoken Conversation Modeling

Maximillian Chen, Ruoxi Sun, Sercan O Arik


Abstract
Conversational assistants are increasingly popular across diverse real-world applications, highlighting the need for advanced multimodal speech modeling. Speech, as a natural mode of communication, encodes rich user-specific characteristics such as speaking rate and pitch, making it critical for effective interaction. Our work introduces a data-centric customization approach for efficiently enhancing multimodal understanding in conversational speech modeling. Central to our contributions is a novel multi-task learning paradigm in which auxiliary tasks are designed to exploit a small amount of speech data. Our approach achieves state-of-the-art performance on the Spoken-SQuAD benchmark using only 10% of the training data with open-weight models, establishing a robust and efficient framework for audio-centric conversational modeling. We also introduce ASK-QA, the first dataset for multi-turn spoken dialogue with ambiguous user requests and dynamic evaluation inputs.
Anthology ID:
2025.findings-acl.71
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1366–1387
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.71/
Cite (ACL):
Maximillian Chen, Ruoxi Sun, and Sercan O Arik. 2025. Data-Centric Improvements for Enhancing Multi-Modal Understanding in Spoken Conversation Modeling. In Findings of the Association for Computational Linguistics: ACL 2025, pages 1366–1387, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Data-Centric Improvements for Enhancing Multi-Modal Understanding in Spoken Conversation Modeling (Chen et al., Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.71.pdf