Domain-Specific Adaptation for ASR through Text-Only Fine-Tuning

Betty Kurian, Abhinav Upadhyay, Abhijeet Sengupta


Abstract
Speech recognition models often struggle in specialized domains because domain-specific paired audio-text data are scarce, making it difficult to adapt general-purpose systems to unique terminology and linguistic patterns. In this work, we propose a text-only domain adaptation method for Whisper that fine-tunes only the decoder on domain-relevant text. Our approach introduces trainable cross-attention bias embeddings, extended with a gated mixture-of-experts routing mechanism, enabling the model to encode domain-specific linguistic priors without any audio data. Unlike ASR adaptation methods that require paired audio-text datasets, our approach needs no audio and remains lightweight and resource-efficient. We observe up to a 56% relative improvement in word error rate over the baseline. Our findings demonstrate that text-only adaptation is a practical and effective strategy for improving speech recognition in specialized domains with limited or no domain-specific audio.
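
The paper is the authoritative reference for the architecture; as a rough illustration only, the sketch below shows one way trainable cross-attention bias embeddings with gated mixture-of-experts routing could be realized in PyTorch. Everything here (the GatedCrossAttentionBias name, num_experts, the router, and the choice to add the bias to each decoder layer's cross-attention output) is an assumption made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedCrossAttentionBias(nn.Module):
    """Illustrative sketch: a bank of trainable bias embeddings ("experts")
    added to a decoder layer's cross-attention output, weighted by a gating
    network conditioned on the decoder hidden state. Only these parameters
    (and, optionally, the decoder) would be updated during text-only
    fine-tuning; the audio encoder stays frozen."""

    def __init__(self, d_model: int, num_experts: int = 4):
        super().__init__()
        # One trainable bias vector per expert, initialized to zero so the
        # adapted model starts out identical to the base model.
        self.bias_experts = nn.Parameter(torch.zeros(num_experts, d_model))
        # Gating network: decoder hidden state -> distribution over experts.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, cross_attn_out: torch.Tensor,
                decoder_hidden: torch.Tensor) -> torch.Tensor:
        # cross_attn_out, decoder_hidden: (batch, seq_len, d_model)
        gate = F.softmax(self.router(decoder_hidden), dim=-1)  # (B, T, E)
        bias = gate @ self.bias_experts                        # (B, T, d_model)
        return cross_attn_out + bias

# Shape-only usage example.
layer = GatedCrossAttentionBias(d_model=384, num_experts=4)
attn_out = torch.randn(2, 10, 384)   # cross-attention output of one layer
hidden = torch.randn(2, 10, 384)     # decoder hidden state at that layer
adapted = layer(attn_out, hidden)    # (2, 10, 384)
```

In a text-only setting, a module like this would typically be trained with the decoder's standard language-modeling loss on domain text, which is what lets domain-specific priors be learned without any paired audio.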
Anthology ID:
2025.mmloso-1.7
Volume:
Proceedings of the 1st Workshop on Multimodal Models for Low-Resource Contexts and Social Impact (MMLoSo 2025)
Month:
December
Year:
2025
Address:
Mumbai, India
Editors:
Ankita Shukla, Sandeep Kumar, Amrit Singh Bedi, Tanmoy Chakraborty
Venues:
MMLoSo | WS
Publisher:
Association for Computational Linguistics
Pages:
78–85
URL:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.mmloso-1.7/
Cite (ACL):
Betty Kurian, Abhinav Upadhyay, and Abhijeet Sengupta. 2025. Domain-Specific Adaptation for ASR through Text-Only Fine-Tuning. In Proceedings of the 1st Workshop on Multimodal Models for Low-Resource Contexts and Social Impact (MMLoSo 2025), pages 78–85, Mumbai, India. Association for Computational Linguistics.
Cite (Informal):
Domain-Specific Adaptation for ASR through Text-Only Fine-Tuning (Kurian et al., MMLoSo 2025)
PDF:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.mmloso-1.7.pdf