Spoken Conversational Agents with Large Language Models

Huck Yang, Andreas Stolcke, Larry P. Heck


Abstract
Spoken conversational agents are converging toward voice-native LLMs. This tutorial distills the path from cascaded ASR/NLU pipelines to end-to-end, retrieval- and vision-grounded systems. We frame the adaptation of text LLMs to audio, cross-modal alignment, and joint speech–text training; review datasets, metrics, and robustness across accents; and compare design choices (cascaded vs. E2E, post-ASR correction, streaming). We link industrial assistants to current open-domain and task-oriented agents, highlight reproducible baselines, and outline open problems in privacy, safety, and evaluation. Attendees leave with practical recipes and a clear systems-level roadmap.
Anthology ID:
2025.emnlp-tutorials.3
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Valentina Pyatkin, Andreas Vlachos
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7–8
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-tutorials.3/
Cite (ACL):
Huck Yang, Andreas Stolcke, and Larry P. Heck. 2025. Spoken Conversational Agents with Large Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts, pages 7–8, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Spoken Conversational Agents with Large Language Models (Yang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-tutorials.3.pdf