MTQ-Eval: Multilingual Text Quality Evaluation for Language Models

Rhitabrat Pokharel, Ameeta Agrawal


Abstract
The use of large language models (LLMs) for evaluating outputs is becoming an increasingly effective and scalable approach. However, it remains uncertain whether this capability extends beyond task-specific evaluations to more general assessments of text quality, particularly in multilingual contexts. In this study, we introduce MTQ-Eval, a novel framework for multilingual text quality evaluation. We automatically generate text quality preference data and train open-source base LLMs to align with ratings of high- and low-quality text. Our comprehensive evaluation across 115 languages demonstrates the improved performance of the proposed model. Additionally, we explore whether this enhanced ability to distinguish between high- and low-quality text translates into better performance on downstream tasks.
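
The abstract's core recipe, automatically pairing high-quality text with a lower-quality counterpart to produce preference data for tuning a base LLM, can be sketched in a few lines of Python. Note that the degradation operations, prompt template, and (prompt, chosen, rejected) triple format below are illustrative assumptions, not the paper's actual data-generation pipeline, which is detailed in the full text.

    import random

    # Hypothetical sketch: the degradation ops and pair format are assumptions
    # for illustration, not the MTQ-Eval pipeline itself.

    def degrade(text: str, rng: random.Random) -> str:
        """Produce a lower-quality variant of a high-quality sentence,
        e.g., by scrambling word order or dropping tokens."""
        words = text.split()
        if rng.choice(["shuffle", "drop"]) == "shuffle":
            rng.shuffle(words)  # scramble word order in place
        else:
            words = [w for w in words if rng.random() > 0.3]  # drop ~30% of tokens
        return " ".join(words)

    def make_preference_pair(text: str, lang: str, rng: random.Random) -> dict:
        """Pair a high-quality text (chosen) with its degraded variant (rejected),
        yielding the (prompt, chosen, rejected) triples that standard
        preference-tuning methods (e.g., DPO-style trainers) expect."""
        prompt = f"Rate the quality of the following {lang} text:"
        return {"prompt": prompt, "chosen": text, "rejected": degrade(text, rng)}

    rng = random.Random(0)
    pair = make_preference_pair("The quick brown fox jumps over the lazy dog.", "English", rng)
    print(pair["rejected"])  # a degraded, lower-quality variant of the input

Triples of this form could then be fed to an off-the-shelf preference-tuning trainer to align an open-source base LLM with high- vs. low-quality ratings, as the abstract describes.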
Anthology ID:
2025.findings-ijcnlp.79
Volume:
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
Month:
December
Year:
2025
Address:
Mumbai, India
Editors:
Kentaro Inui, Sakriani Sakti, Haofen Wang, Derek F. Wong, Pushpak Bhattacharyya, Biplab Banerjee, Asif Ekbal, Tanmoy Chakraborty, Dhirendra Pratap Singh
Venue:
Findings
Publisher:
The Asian Federation of Natural Language Processing and The Association for Computational Linguistics
Pages:
1289–1304
URL:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.findings-ijcnlp.79/
Cite (ACL):
Rhitabrat Pokharel and Ameeta Agrawal. 2025. MTQ-Eval: Multilingual Text Quality Evaluation for Language Models. In Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, pages 1289–1304, Mumbai, India. The Asian Federation of Natural Language Processing and The Association for Computational Linguistics.
Cite (Informal):
MTQ-Eval: Multilingual Text Quality Evaluation for Language Models (Pokharel & Agrawal, Findings 2025)
PDF:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.findings-ijcnlp.79.pdf