Abstract
This study assesses an index for measuring the pronunciation difficulty of sentences (henceforth, pronounceability) based on the normalized edit distance from a reference sentence to a transcription of learners’ pronunciation. Pronounceability should be examined when language teachers use a computer-assisted language learning system for pronunciation learning, in order to maintain learners’ motivation. However, unlike the evaluation of learners’ pronunciation performance, previous research has not focused on pronounceability, either for English or for Asian languages. This study found that the normalized edit distance was reliable but not valid. The lack of validity appeared to stem from the English test used to determine the learners’ proficiency.
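To make the index concrete, below is a minimal Python sketch of a normalized edit distance between a reference sentence and a transcription of a learner’s pronunciation. The character-level comparison, the normalization by the length of the longer string, and the example transcription are illustrative assumptions; the abstract does not specify these details.

```python
def edit_distance(ref: str, hyp: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    m, n = len(ref), len(hyp)
    prev = list(range(n + 1))  # distances for the previous row
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]


def normalized_edit_distance(ref: str, hyp: str) -> float:
    """Edit distance scaled to [0, 1]; higher values indicate that the
    transcription diverges more from the reference, i.e., the sentence
    was harder to pronounce under this index."""
    if not ref and not hyp:
        return 0.0
    return edit_distance(ref, hyp) / max(len(ref), len(hyp))


if __name__ == "__main__":
    reference = "she sells seashells by the seashore"
    transcription = "she sell sea shell by the sea shore"  # hypothetical transcription
    print(f"pronounceability index: {normalized_edit_distance(reference, transcription):.3f}")
```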
- Anthology ID:
- W18-3717
- Volume:
- Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications
- Month:
- July
- Year:
- 2018
- Address:
- Melbourne, Australia
- Editors:
- Yuen-Hsien Tseng, Hsin-Hsi Chen, Vincent Ng, Mamoru Komachi
- Venue:
- NLP-TEA
- Publisher:
- Association for Computational Linguistics
- Pages:
- 119–124
- URL:
- https://aclanthology.org/W18-3717
- DOI:
- 10.18653/v1/W18-3717
- Cite (ACL):
- Katsunori Kotani and Takehiko Yoshimi. 2018. Assessment of an Index for Measuring Pronunciation Difficulty. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications, pages 119–124, Melbourne, Australia. Association for Computational Linguistics.
- Cite (Informal):
- Assessment of an Index for Measuring Pronunciation Difficulty (Kotani & Yoshimi, NLP-TEA 2018)
- PDF:
- https://preview.aclanthology.org/fix-dup-bibkey/W18-3717.pdf