Can Large Language Models Translate Unseen Languages in Underrepresented Scripts?
Dianqing Lin | Aruukhan | Hongxu Hou | Shuo Sun | Wei Chen | Yichen Yang | Guo Dong Shi
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) have demonstrated impressive performance in machine translation, but they still struggle with unseen low-resource languages, especially those written in underrepresented scripts. To investigate whether LLMs can translate such languages with the help of linguistic resources, we introduce Lotus, a benchmark designed to evaluate translation for Mongolian (in traditional script) and Yi. Our study shows that while linguistic resources can improve translation quality as measured by automatic metrics, LLMs remain limited in their ability to handle these languages effectively. We hope our work provides insights for the low-resource NLP community and fosters further progress in machine translation for low-resource languages written in underrepresented scripts. Our code and data are available.