Towards Large Language Model driven Reference-less Translation Evaluation for English and Indian Language

Vandan Mujadia, Pruthwik Mishra, Arafat Ahsan, Dipti M. Sharma


Abstract
With the primary focus on evaluating the effectiveness of large language models for automatic reference-less translation assessment, this work presents our experiments on mimicking human direct assessment to evaluate the quality of translations between English and Indian languages. We constructed a translation evaluation task in which we performed zero-shot learning, in-context example-driven learning, and fine-tuning of large language models to produce a score out of 100, where 100 represents a perfect translation and 1 represents a poor one. We compared the performance of our trained systems with existing methods such as COMET, BERTScore, and LaBSE, and found that the LLM-based evaluator (LLaMA-2-13B) achieves a comparable or higher overall correlation with human judgments for the considered Indian language pairs (see Figure 1).
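
The reference-less direct-assessment setup described in the abstract can be sketched as a prompt-and-parse loop: ask the model for a 1-100 score given only the source and the translation, then correlate the parsed scores with human judgments. The sketch below is a minimal illustration, assuming a generic generate callable standing in for a LLaMA-2-13B endpoint; the prompt wording, function names, and the dummy generator are illustrative assumptions, not the paper's exact template.

# Minimal sketch of zero-shot, reference-less direct assessment with an LLM.
# The prompt text, helper names, and dummy generator are illustrative only.
import re
from typing import Callable, List

from scipy.stats import spearmanr  # rank correlation with human judgments


def build_prompt(source: str, translation: str, src_lang: str, tgt_lang: str) -> str:
    # Ask the model for a single 1-100 quality score, with no reference translation.
    return (
        f"Rate the following {src_lang}-to-{tgt_lang} translation on a scale of "
        f"1 to 100, where 100 is a perfect translation and 1 is a poor translation. "
        f"Reply with the number only.\n"
        f"Source: {source}\nTranslation: {translation}\nScore:"
    )


def llm_score(generate: Callable[[str], str], source: str, translation: str,
              src_lang: str, tgt_lang: str) -> float:
    # Query the model and parse the first integer in its reply.
    reply = generate(build_prompt(source, translation, src_lang, tgt_lang))
    match = re.search(r"\d+", reply)
    return float(match.group()) if match else 0.0


def correlation_with_humans(system_scores: List[float], human_scores: List[float]) -> float:
    # Segment-level Spearman correlation between LLM scores and human DA scores.
    return spearmanr(system_scores, human_scores)[0]


if __name__ == "__main__":
    # A dummy generator stands in for a real LLaMA-2-13B (zero-shot) endpoint.
    dummy_generate = lambda prompt: "85"
    print(llm_score(dummy_generate, "The weather is nice today.",
                    "आज मौसम अच्छा है।", "English", "Hindi"))

In-context example-driven learning would reuse the same loop with a few labeled source-translation-score examples prepended to the prompt, and fine-tuning would train the model on such scored pairs directly; both are evaluated the same way, by correlation with human direct-assessment scores.
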
Anthology ID: 2023.icon-1.28
Volume: Proceedings of the 20th International Conference on Natural Language Processing (ICON)
Month: December
Year: 2023
Address: Goa University, Goa, India
Editors: Jyoti D. Pawar, Sobha Lalitha Devi
Venue: ICON
SIG: SIGLEX
Publisher: NLP Association of India (NLPAI)
Pages: 357–369
URL: https://aclanthology.org/2023.icon-1.28
Cite (ACL): Vandan Mujadia, Pruthwik Mishra, Arafat Ahsan, and Dipti M. Sharma. 2023. Towards Large Language Model driven Reference-less Translation Evaluation for English and Indian Language. In Proceedings of the 20th International Conference on Natural Language Processing (ICON), pages 357–369, Goa University, Goa, India. NLP Association of India (NLPAI).
Cite (Informal): Towards Large Language Model driven Reference-less Translation Evaluation for English and Indian Language (Mujadia et al., ICON 2023)
PDF: https://preview.aclanthology.org/nschneid-patch-4/2023.icon-1.28.pdf