SSR: Alignment-Aware Modality Connector for Speech Language Models

Weiting Tan, Hirofumi Inaguma, Ning Dong, Paden D. Tomasello, Xutai Ma


Abstract
Fusing speech into a pre-trained language model to build a speech language model (SpeechLM) usually suffers from inefficient encoding of long-form speech and catastrophic forgetting of the pre-trained text modality. We propose SSR (Segmented Speech Representation Connector) for better modality fusion. Leveraging speech-text alignments, our approach segments and compresses speech features to match the granularity of text embeddings. Additionally, we introduce a two-stage training pipeline, consisting of a distillation phase and a fine-tuning phase, to mitigate catastrophic forgetting. SSR outperforms existing mechanisms for speech-text modality fusion, consistently achieving better speech understanding (e.g., +10 accuracy points on StoryCloze and +20 on Speech-MMLU) while preserving pre-trained text ability.
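
A minimal sketch of the connector's core idea, alignment-aware segmentation and compression, is given below. This is an illustration rather than the authors' implementation: it assumes a frame-to-token alignment is already available, and mean pooling plus a linear projection are hypothetical choices for compressing each aligned segment of speech frames into one text-granularity vector.

# Illustrative sketch (not the paper's exact method): segment frame-level
# speech features by a speech-text alignment, mean-pool each segment, and
# project into the text-embedding space of the pre-trained LM.
import torch
import torch.nn as nn

class AlignmentAwareConnector(nn.Module):
    def __init__(self, speech_dim: int, text_dim: int):
        super().__init__()
        # Hypothetical compression head: a single linear projection from
        # the speech feature space to the LM's text-embedding space.
        self.proj = nn.Linear(speech_dim, text_dim)

    def forward(self, frames: torch.Tensor, alignment: torch.Tensor) -> torch.Tensor:
        """frames: (T, speech_dim) frame-level speech features.
        alignment: (T,) long tensor mapping each frame to a text-token index.
        Returns: (num_tokens, text_dim), one compressed vector per text token."""
        num_tokens = int(alignment.max().item()) + 1
        pooled = frames.new_zeros(num_tokens, frames.size(1))
        # Sum the frames that align to the same text token...
        pooled.index_add_(0, alignment, frames)
        # ...then divide by segment lengths to get a mean per segment.
        counts = torch.bincount(alignment, minlength=num_tokens).clamp(min=1)
        pooled = pooled / counts.unsqueeze(1).to(pooled.dtype)
        return self.proj(pooled)

# Usage: 50 speech frames aligned (monotonically) to 7 text tokens.
frames = torch.randn(50, 1024)
alignment = torch.sort(torch.randint(0, 7, (50,))).values
connector = AlignmentAwareConnector(speech_dim=1024, text_dim=4096)
print(connector(frames, alignment).shape)  # torch.Size([7, 4096])

The resulting sequence has the same length as the text-token sequence, which is what lets the compressed speech features slot into the LM at text granularity instead of at the much longer frame rate.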
Anthology ID: 2025.iwslt-1.5
Volume: Proceedings of the 22nd International Conference on Spoken Language Translation (IWSLT 2025)
Month: July
Year: 2025
Address: Vienna, Austria (in-person and online)
Editors: Elizabeth Salesky, Marcello Federico, Antonis Anastasopoulos
Venues: IWSLT | WS
Publisher: Association for Computational Linguistics
Pages: 56–75
URL: https://preview.aclanthology.org/landing_page/2025.iwslt-1.5/
Cite (ACL): Weiting Tan, Hirofumi Inaguma, Ning Dong, Paden D. Tomasello, and Xutai Ma. 2025. SSR: Alignment-Aware Modality Connector for Speech Language Models. In Proceedings of the 22nd International Conference on Spoken Language Translation (IWSLT 2025), pages 56–75, Vienna, Austria (in-person and online). Association for Computational Linguistics.
Cite (Informal): SSR: Alignment-Aware Modality Connector for Speech Language Models (Tan et al., IWSLT 2025)
PDF: https://preview.aclanthology.org/landing_page/2025.iwslt-1.5.pdf