Linchen Xiao


2025

Are Your LLMs Capable of Stable Reasoning?
Junnan Liu | Hongwei Liu | Linchen Xiao | Ziyi Wang | Kuikun Liu | Songyang Gao | Wenwei Zhang | Songyang Zhang | Kai Chen
Findings of the Association for Computational Linguistics: ACL 2025

The rapid advancement of large language models (LLMs) has brought remarkable progress in complex reasoning tasks. However, a significant disparity exists between benchmark performance and real-world applications. We attribute this gap primarily to current evaluation protocols and metrics, which inadequately capture the full spectrum of LLM capabilities, especially in complex reasoning tasks where both accuracy and consistency are essential. In this paper, we introduce **G-Pass@k**, a novel evaluation metric that continuously assesses model performance across multiple sampling attempts, quantifying both the model’s performance potential and its stability. Through extensive experiments on various public and newly constructed benchmarks, we employ G-Pass@k in conjunction with state-of-the-art large language models to provide comprehensive insights into their potential capabilities and operational consistency. Our findings reveal significant room to improve the realistic reasoning abilities of LLMs, underscoring the necessity for more robust evaluation metrics.
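
The abstract names G-Pass@k without spelling out its computation. A minimal sketch of the thresholded idea it describes is below, assuming a hypergeometric estimator in the style of pass@k with a correctness threshold τ; the function name, interface, and exact formulation here are illustrative assumptions, not the paper’s verbatim definition.

```python
from math import ceil, comb

def g_pass_at_k(n: int, c: int, k: int, tau: float) -> float:
    """Estimate the chance that at least ceil(tau * k) of k generations,
    drawn uniformly without replacement from n samples (c of them correct),
    are correct. tau = 1.0 demands all k draws be correct; as tau decreases,
    the metric relaxes toward ordinary pass@k.
    """
    if not 0 < tau <= 1:
        raise ValueError("tau must lie in (0, 1]")
    if not (0 <= c <= n and 0 < k <= n):
        raise ValueError("require 0 <= c <= n and 0 < k <= n")
    need = ceil(tau * k)  # minimum number of correct draws required
    # Hypergeometric tail: ways to draw j correct and k - j incorrect samples.
    return sum(
        comb(c, j) * comb(n - c, k - j) for j in range(need, min(c, k) + 1)
    ) / comb(n, k)

# Hypothetical example: 16 generations per question, 10 judged correct.
print(g_pass_at_k(n=16, c=10, k=4, tau=0.25))  # lenient: ~potential (pass@4-like)
print(g_pass_at_k(n=16, c=10, k=4, tau=1.0))   # strict: stability across all draws
```

Per-question scores would then be averaged over the benchmark; sweeping τ from lenient to strict traces the gap between a model’s best-case potential and its stable, consistent performance that the abstract highlights.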