In-Context Learning Boosts Speech Recognition via Human-like Adaptation to Speakers and Language Varieties

Nathan Roll, Calbert Graham, Yuka Tatsumi, Kim Tien Nguyen, Meghan Sumner, Dan Jurafsky


Abstract
Human listeners readily adjust to unfamiliar speakers and language varieties through exposure, but do these adaptation benefits extend to state-of-the-art spoken language models (SLMs)? We introduce a scalable framework that enables in-context learning (ICL) in Phi-4 Multimodal (Phi-4-MM) via interleaved task prompts and audio-text pairs, and find that as few as 12 example utterances (~50 seconds) at inference time reduce word error rates by a relative 19.7% (1.2 pp.) on average across diverse English corpora. These improvements are most pronounced for low-resource varieties, when the context and target speaker match, and when more examples are provided, though scaling the procedure yields diminishing marginal returns to context length. Overall, our ICL adaptation scheme (1) reveals a performance profile similar to that of human listeners and (2) consistently improves automatic speech recognition (ASR) robustness across diverse speakers and language backgrounds. While adaptation succeeds broadly, significant gaps remain for certain varieties, revealing where current models still fall short of human flexibility. We release our prompts and code on GitHub.
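The abstract describes interleaving task prompts with audio-text example pairs at inference time. As a rough illustration of what such an interleaved ICL prompt could look like, the sketch below assembles up to 12 example (audio, transcript) pairs ahead of a target utterance. The chat markers, audio placeholder tokens, task wording, and helper name are illustrative assumptions, not the authors' released prompts (those are available in their GitHub release), and the audio itself would be supplied to the model's processor separately.

```python
# Minimal sketch of an interleaved ICL prompt for speech recognition.
# Assumptions (not from the paper): the <|user|>/<|assistant|>/<|end|> chat
# markers and <|audio_k|> placeholders follow the chat-style format commonly
# used with multimodal instruct models such as Phi-4-MM; the authors' actual
# prompt template may differ.

from typing import List, Tuple


def build_icl_prompt(examples: List[Tuple[str, str]], n_context: int = 12) -> str:
    """Interleave a task instruction with (audio placeholder, transcript)
    pairs, then append the target utterance for the model to transcribe.

    `examples` holds (audio_path, reference_transcript) pairs; the paths are
    kept only for bookkeeping here, since the waveforms would be passed to
    the model's processor alongside this text in placeholder order.
    """
    task = "Transcribe the audio clip into text."
    used = examples[:n_context]
    parts = []
    for i, (_audio_path, transcript) in enumerate(used, start=1):
        parts.append(f"<|user|>{task}<|audio_{i}|><|end|>")
        parts.append(f"<|assistant|>{transcript}<|end|>")
    # Final turn: the target utterance, left for the model to complete.
    parts.append(f"<|user|>{task}<|audio_{len(used) + 1}|><|end|><|assistant|>")
    return "".join(parts)


if __name__ == "__main__":
    demo = [(f"speaker_a_utt{i}.wav", f"reference transcript {i}") for i in range(12)]
    print(build_icl_prompt(demo))
```

In the paper's setup, roughly 12 such examples (about 50 seconds of audio) were enough for a 19.7% relative WER reduction, with diminishing returns as the context grows longer.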
Anthology ID:
2025.emnlp-main.219
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4412–4426
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.219/
Cite (ACL):
Nathan Roll, Calbert Graham, Yuka Tatsumi, Kim Tien Nguyen, Meghan Sumner, and Dan Jurafsky. 2025. In-Context Learning Boosts Speech Recognition via Human-like Adaptation to Speakers and Language Varieties. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 4412–4426, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
In-Context Learning Boosts Speech Recognition via Human-like Adaptation to Speakers and Language Varieties (Roll et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.219.pdf
Checklist:
2025.emnlp-main.219.checklist.pdf