Yunmeng Li


2025

Rubrik’s Cube: Testing a New Rubric for Evaluating Explanations on the CUBE dataset
Diana Galvan-Sosa | Gabrielle Gaudeau | Pride Kavumba | Yunmeng Li | Hongyi Gu | Zheng Yuan | Keisuke Sakaguchi | Paula Buttery
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

The performance and usability of Large Language Models (LLMs) are driving their use in explanation generation tasks. However, despite their widespread adoption, LLM explanations have been found to be unreliable, making it difficult for users to distinguish good from bad explanations. To address this issue, we present Rubrik’s CUBE, an education-inspired rubric and a dataset of 26k explanations, written by both humans and six open- and closed-source LLMs and later quality-annotated using the rubric. The CUBE dataset covers two reasoning and two language tasks, providing the diversity needed to effectively test our proposed rubric. Using Rubrik, we find that explanation quality is influenced by both the task and its perceived difficulty. Low quality stems primarily from a lack of conciseness in LLM-generated explanations rather than from poor cohesion or word choice. The full dataset, rubric, and code are available at https://github.com/RubriksCube/rubriks_cube.

MQM-Chat: Multidimensional Quality Metrics for Chat Translation
Yunmeng Li | Jun Suzuki | Makoto Morishita | Kaori Abe | Kentaro Inui
Proceedings of the 31st International Conference on Computational Linguistics

The complexities of chat, such as stylized content tied to specific source segments and the need for dialogue consistency, pose significant challenges for machine translation. Recognizing the need for a precise evaluation metric for chat translation, this study introduces Multidimensional Quality Metrics for Chat Translation (MQM-Chat), which encompasses seven error types, including three designed specifically for chat translation: ambiguity and disambiguation, buzzword or loanword issues, and dialogue inconsistency. In this study, human annotators labeled the translations of chat data generated by five translation models. Based on the resulting error distribution and on how reliably errors could be relabeled into chat-specific types, we conclude that MQM-Chat classifies errors effectively while explicitly highlighting chat-specific issues. The results demonstrate that MQM-Chat can assess both the lexical and semantic accuracy of translation models in chat translation tasks.

2023

An Investigation of Warning Erroneous Chat Translations in Cross-lingual Communication
Yunmeng Li | Jun Suzuki | Makoto Morishita | Kaori Abe | Kentaro Inui
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics: Student Research Workshop

2022

Chat Translation Error Detection for Assisting Cross-lingual Communications
Yunmeng Li | Jun Suzuki | Makoto Morishita | Kaori Abe | Ryoko Tokuhisa | Ana Brassard | Kentaro Inui
Proceedings of the 3rd Workshop on Evaluation and Comparison of NLP Systems