Contrastive Learning for Task-Independent SpeechLLM-Pretraining

Maike Züfle, Jan Niehues


Abstract
Large language models (LLMs) excel in natural language processing, but adapting them to speech processing tasks efficiently is not straightforward. Direct task-specific fine-tuning is limited by overfitting risks, data requirements, and computational costs. To address these challenges, we propose a scalable, two-stage training approach: (1) a task-independent speech pretraining stage that uses contrastive learning to align text and speech representations across all layers, followed by (2) a task-specific fine-tuning stage that requires minimal data. This approach outperforms traditional ASR pretraining and enables the model to surpass models specialized in speech translation and question answering while being trained on only 10% of the task-specific data.
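
To make the pretraining objective concrete, the following is a minimal sketch of a layer-wise contrastive (InfoNCE-style) alignment loss between pooled speech and text representations, one term per LLM layer. This is an illustration under stated assumptions, not the authors' released implementation; the function name, input shapes, and temperature value are hypothetical.

```python
# Sketch: layer-wise contrastive alignment of speech and text representations.
# All names, shapes, and the temperature value are illustrative assumptions,
# not the implementation described in the paper.
import torch
import torch.nn.functional as F

def layerwise_contrastive_loss(speech_states, text_states, temperature=0.07):
    """speech_states, text_states: lists of [batch, hidden] pooled
    representations, one tensor per LLM layer, with paired items at
    matching batch positions."""
    total = 0.0
    for h_speech, h_text in zip(speech_states, text_states):
        # Normalize so the dot product is a cosine similarity.
        s = F.normalize(h_speech, dim=-1)
        t = F.normalize(h_text, dim=-1)
        logits = s @ t.T / temperature  # [batch, batch] similarity matrix
        labels = torch.arange(s.size(0), device=s.device)
        # Symmetric InfoNCE: each speech item should match its paired text
        # against in-batch negatives, and vice versa.
        total = total + 0.5 * (F.cross_entropy(logits, labels)
                               + F.cross_entropy(logits.T, labels))
    return total / len(speech_states)
```

Averaging the per-layer terms keeps the loss scale independent of model depth; in practice, the pooling strategy and which layers are included are design choices left to the full paper.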
Anthology ID:
2025.findings-acl.445
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8469–8490
URL:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.445/
DOI:
10.18653/v1/2025.findings-acl.445
Cite (ACL):
Maike Züfle and Jan Niehues. 2025. Contrastive Learning for Task-Independent SpeechLLM-Pretraining. In Findings of the Association for Computational Linguistics: ACL 2025, pages 8469–8490, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Contrastive Learning for Task-Independent SpeechLLM-Pretraining (Züfle & Niehues, Findings 2025)
PDF:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.445.pdf