Read to Hear: A Zero-Shot Pronunciation Assessment Using Textual Descriptions and LLMs

Yu-Wen Chen, Melody Ma, Julia Hirschberg


Abstract
Automatic pronunciation assessment is typically performed by acoustic models trained on audio-score pairs. Although effective, these systems provide only numerical scores, without the information needed to help learners understand their errors. Meanwhile, large language models (LLMs) have proven effective in supporting language learning, but their potential for assessing pronunciation remains unexplored. In this work, we introduce TextPA, a zero-shot, Textual description-based Pronunciation Assessment approach. TextPA utilizes human-readable representations of speech signals, which are fed into an LLM to assess pronunciation accuracy and fluency, while also providing reasoning behind the assigned scores. Finally, a phoneme sequence match scoring method is used to refine the accuracy scores. Our work highlights a previously overlooked direction for pronunciation assessment. Instead of relying on supervised training with audio-score examples, we exploit the rich pronunciation knowledge embedded in written text. Experimental results show that our approach is both cost-efficient and competitive in performance. Furthermore, TextPA significantly improves the performance of conventional audio-score-trained models on out-of-domain data by offering a complementary perspective.
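The abstract mentions refining accuracy scores with a phoneme sequence match. The paper's exact matching algorithm is not given here; the sketch below assumes a simple similarity-ratio alignment between the canonical phoneme sequence and the phonemes recognized from the learner's audio, with `phoneme_match_score` as a hypothetical helper name.

```python
from difflib import SequenceMatcher

def phoneme_match_score(expected, observed):
    """Score pronunciation accuracy as the similarity between the
    expected (canonical) and observed (recognized) phoneme sequences.

    Uses difflib's similarity ratio: 2 * matches / total length,
    so identical sequences score 1.0 and disjoint sequences score 0.0.
    """
    return SequenceMatcher(None, expected, observed).ratio()

# Illustrative ARPAbet-style phonemes for "hello":
expected = ["HH", "AH", "L", "OW"]   # canonical pronunciation
observed = ["HH", "EH", "L", "OW"]   # one vowel mispronounced
score = phoneme_match_score(expected, observed)  # 6/8 = 0.75
```

Such a score could be combined with the LLM's accuracy judgment; the weighting between the two is a design choice the abstract does not specify.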
Anthology ID:
2025.emnlp-main.134
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2682–2694
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.134/
Cite (ACL):
Yu-Wen Chen, Melody Ma, and Julia Hirschberg. 2025. Read to Hear: A Zero-Shot Pronunciation Assessment Using Textual Descriptions and LLMs. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 2682–2694, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Read to Hear: A Zero-Shot Pronunciation Assessment Using Textual Descriptions and LLMs (Chen et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.134.pdf
Checklist:
2025.emnlp-main.134.checklist.pdf