2025
MMLU-ProX: A Multilingual Benchmark for Advanced Large Language Model Evaluation
Weihao Xuan | Rui Yang | Heli Qi | Qingcheng Zeng | Yunze Xiao | Aosong Feng | Dairui Liu | Yun Xing | Junjue Wang | Fan Gao | Jinghui Lu | Yuang Jiang | Huitao Li | Xin Li | Kunyu Yu | Ruihai Dong | Shangding Gu | Yuekang Li | Xiaofei Xie | Felix Juefei-Xu | Foutse Khomh | Osamu Yoshie | Qingyu Chen | Douglas Teodoro | Nan Liu | Randy Goebel | Lei Ma | Edison Marrese-Taylor | Shijian Lu | Yusuke Iwasawa | Yutaka Matsuo | Irene Li
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Existing large language model (LLM) evaluation benchmarks primarily focus on English, while current multilingual tasks lack parallel questions that specifically assess cross-lingual reasoning abilities. This dual limitation makes it challenging to comprehensively assess LLMs’ performance in multilingual settings. To fill this gap, we introduce MMLU-ProX, a comprehensive benchmark covering 29 languages, built on the English benchmark MMLU-Pro. Each language version consists of 11,829 identical questions, enabling direct cross-lingual comparisons. Additionally, to support efficient evaluation, we provide a lite version containing 658 questions per language. To ensure the high quality of MMLU-ProX, we employ a rigorous development process in which multiple powerful LLMs perform translation, followed by expert review to ensure accurate expression, consistent terminology, and cultural relevance. Building on this, we systematically evaluate 36 state-of-the-art LLMs, including reasoning-enhanced and multilingual-optimized models. The results reveal significant disparities in the multilingual capabilities of LLMs: while they perform well in high-resource languages, their performance declines markedly in low-resource languages, particularly African languages. Through MMLU-ProX, we aim to advance the development of more inclusive AI systems and promote equitable access to technology across global contexts.
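For readers who want a concrete picture of how such a parallel benchmark is typically scored, the sketch below computes per-language accuracy over identical questions. The record fields (language, question, options, answer) and the answer_fn model wrapper are illustrative assumptions, not MMLU-ProX's actual schema or tooling.

```python
from collections import defaultdict
from typing import Callable, Dict, Iterable, List

# Hypothetical record layout: a language code, a question string, a list of
# answer options, and the index of the gold option.
Example = Dict[str, object]


def per_language_accuracy(
    examples: Iterable[Example],
    answer_fn: Callable[[str, List[str]], int],
) -> Dict[str, float]:
    """Score a model (wrapped as answer_fn) on parallel questions, grouped by language."""
    correct: Dict[str, int] = defaultdict(int)
    total: Dict[str, int] = defaultdict(int)
    for ex in examples:
        lang = str(ex["language"])
        pred = answer_fn(str(ex["question"]), list(ex["options"]))
        total[lang] += 1
        if pred == ex["answer"]:
            correct[lang] += 1
    return {lang: correct[lang] / total[lang] for lang in total}


if __name__ == "__main__":
    # Tiny illustrative records; a real run would iterate over the full benchmark.
    toy = [
        {"language": "en", "question": "2 + 2 = ?", "options": ["3", "4"], "answer": 1},
        {"language": "sw", "question": "2 + 2 = ?", "options": ["3", "4"], "answer": 1},
    ]
    always_second = lambda question, options: 1  # stand-in for an actual LLM call
    print(per_language_accuracy(toy, always_second))  # {'en': 1.0, 'sw': 1.0}
```

Because every language shares the same 11,829 questions, per-language scores computed this way are directly comparable across languages.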
ReAgent: Reversible Multi-Agent Reasoning for Knowledge-Enhanced Multi-Hop QA
Zhao Xinjie | Fan Gao | Xingyu Song | Yingjian Chen | Rui Yang | Yanran Fu | Yuyang Wang | Yusuke Iwasawa | Yutaka Matsuo | Irene Li
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Multi-hop question answering (QA) remains challenging, as solutions must reliably integrate and reconcile evidence from multiple sources without succumbing to error propagation. While large language models (LLMs) have achieved substantial improvements via chain-of-thought (CoT) prompting and retrieval-augmented generation, these methods typically adopt a forward-only workflow—early mistakes persist throughout inference, and contradictions discovered later cannot systematically trigger re-evaluation. To address this limitation, we present ReAgent, a reversible multi-agent reasoning framework. Specifically, ReAgent enables agents to backtrack to earlier valid states when conflicts arise, thereby isolating and rectifying flawed assumptions before they undermine subsequent reasoning. Our approach combines explicit local and global rollback protocols with modular role specialization, resulting in a flexible and error-tolerant pipeline. Empirical evaluation on three multi-hop QA benchmarks demonstrates consistent performance gains of approximately 6% over forward-only baselines, in addition to enhanced interpretability. These findings highlight the value of non-monotonic, backtracking-driven inference in complex QA scenarios and point to broader implications for multi-agent collaboration in knowledge-intensive tasks.
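The core backtracking idea can be illustrated with a minimal sketch: keep checkpoints of consistent intermediate states and roll back to the most recent one whenever a newly proposed step contradicts earlier evidence. The propose_step and conflicts hooks below are hypothetical stand-ins for ReAgent's agents and verification protocol, not the paper's implementation.

```python
from typing import Callable, List, Optional


def reversible_reasoning(
    question: str,
    propose_step: Callable[[str, List[str]], Optional[str]],
    conflicts: Callable[[List[str], str], bool],
    max_steps: int = 20,
) -> List[str]:
    """Build a reasoning trace step by step, rolling back when contradictions appear.

    propose_step(question, trace) returns the next intermediate conclusion, or
    None once the trace already contains the answer; conflicts(trace, step)
    flags a contradiction with earlier evidence. Both are hypothetical hooks.
    """
    trace: List[str] = []
    checkpoints: List[int] = [0]  # trace lengths known to be consistent
    for _ in range(max_steps):
        step = propose_step(question, trace)
        if step is None:
            break
        if conflicts(trace, step):
            # Rollback: restore the previous checkpoint so the flawed assumption
            # cannot contaminate later reasoning, then try again from there.
            if len(checkpoints) > 1:
                checkpoints.pop()
            trace = trace[: checkpoints[-1]]
            continue
        trace.append(step)
        checkpoints.append(len(trace))  # this state becomes a valid restore point
    return trace
```

This only models a single-trace (local) rollback; a global rollback in a multi-agent setting would additionally reset the shared context that other agents have built on the discarded states.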
TLUE: A Tibetan Language Understanding Evaluation Benchmark
Fan Gao | Cheng Huang | Yutong Liu | Nyima Tashi | Xiangxiang Wang | Thupten Tsering | Ban Ma-bao | Renzeng Duojie | Gadeng Luosang | Rinchen Dongrub | Dorje Tashi | Xiao Feng Cd | Yongbin Yu | Hao Wang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) have made tremendous progress in recent years, but low-resource languages such as Tibetan remain significantly underrepresented in their evaluation. Despite Tibetan being spoken by over seven million people, it has largely been neglected in the development and assessment of LLMs. To address this gap, we present TLUE, a Tibetan Language Understanding Evaluation benchmark and the first large-scale benchmark for measuring the proficiency of LLMs in the Tibetan language. TLUE comprises two major components: a comprehensive multi-task understanding benchmark spanning 5 domains and 67 subdomains, and a safety benchmark encompassing 7 subdomains. Finally, we evaluate a diverse set of state-of-the-art LLMs. Experimental results demonstrate that most LLMs perform below the random baseline, highlighting the considerable challenges they face in Tibetan language processing. TLUE provides a crucial foundation for advancing future research in Tibetan language understanding and highlights the importance of promoting greater inclusivity in the development of large language models.
TDCSA: LLM-Guided Top-Down Approach for Robust Citation Sentiment Analysis
Fan Gao | Jieyang Peng | Xiaoming Tao | Wang Youzheng
Findings of the Association for Computational Linguistics: ACL 2025
Citation Sentiment Analysis (CSA) plays a crucial role in understanding academic influence and knowledge diffusion. While pre-trained language models (PLMs) and large language models (LLMs) have shown remarkable success in general sentiment analysis, they encounter specialized challenges in CSA due to the subtle, implicit sentiment expressions in academic writing and the complex sentiment transitions they involve. To address these challenges, we propose TDCSA, a Top-Down framework that leverages LLMs’ semantic understanding capabilities to enhance PLM-based CSA, transforming the traditional bottom-up feature engineering paradigm into a top-down architecture. Our framework consists of three key components: (1) a Dual LLM Feature Generation module for robust quadruple extraction, (2) a Multi-view Feature Representation mechanism for neutral citation processing, and (3) a Quad Feature Enhanced PLM. Experiments demonstrate that TDCSA significantly outperforms existing methods, achieving state-of-the-art performance while maintaining robustness to quadruple quality variations.
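A rough sketch of the top-down flow described above, under illustrative assumptions: an LLM first extracts sentiment quadruples from a citation sentence, the quadruples are encoded as auxiliary features, and a PLM classifier consumes the sentence together with those features. The Quadruple fields and function names are hypothetical, not the paper's actual interfaces.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence


@dataclass
class Quadruple:
    """Hypothetical citation-sentiment quadruple: who says what about whom, and how."""
    holder: str
    target: str
    expression: str
    polarity: str  # e.g. "positive" | "negative" | "neutral"


def top_down_csa(
    sentence: str,
    llm_extract: Callable[[str], List[Quadruple]],                  # dual LLM feature generation (stub)
    encode_features: Callable[[Sequence[Quadruple]], List[float]],  # multi-view representation (stub)
    plm_classify: Callable[[str, List[float]], str],                # quad-feature-enhanced PLM (stub)
) -> str:
    """Run the LLM-guided top-down pipeline on one citation sentence."""
    quads = llm_extract(sentence)        # 1) LLM proposes structured sentiment evidence
    feats = encode_features(quads)       # 2) quadruples become auxiliary feature vectors
    return plm_classify(sentence, feats)  # 3) PLM makes the final sentiment decision


if __name__ == "__main__":
    # Dummy hooks so the sketch runs end to end.
    demo = top_down_csa(
        "Smith et al. (2020) substantially improve over prior work.",
        llm_extract=lambda s: [Quadruple("author", "prior work", "improve over", "positive")],
        encode_features=lambda qs: [1.0 if q.polarity == "positive" else 0.0 for q in qs],
        plm_classify=lambda s, f: "positive" if sum(f) > 0 else "neutral",
    )
    print(demo)  # positive
```

The point of the top-down ordering is that structured LLM evidence guides the PLM's decision, rather than the PLM assembling low-level features bottom-up.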
2024
Evaluating Large Language Models on Wikipedia-Style Survey Generation
Fan Gao | Hang Jiang | Rui Yang | Qingcheng Zeng | Jinghui Lu | Moritz Blum | Tianwei She | Yuang Jiang | Irene Li
Findings of the Association for Computational Linguistics: ACL 2024
Educational materials such as survey articles in specialized fields like computer science traditionally require tremendous expert input and are therefore expensive to create and update. Recently, Large Language Models (LLMs) have achieved significant success across various general tasks. However, their effectiveness and limitations in the education domain are yet to be fully explored. In this work, we examine the proficiency of LLMs in generating succinct survey articles specific to the niche field of NLP in computer science, focusing on a curated list of 99 topics. Automated benchmarks reveal that GPT-4 surpasses its predecessors, including GPT-3.5, PaLM2, and LLaMa2, by margins ranging from 2% to 20% against the established ground truth. We compare both human and GPT-based evaluation scores and provide an in-depth analysis. While our findings suggest that GPT-created surveys are more contemporary and accessible than human-authored ones, certain limitations were observed. Notably, GPT-4, despite often delivering outstanding content, occasionally exhibited lapses such as missing details or factual errors. Finally, we compared the rating behavior of humans and GPT-4 and found systematic bias in using GPT evaluation.