Xinwei Yang
2025
ELABORATION: A Comprehensive Benchmark on Human-LLM Competitive Programming
Xinwei Yang | Zhaofeng Liu | Chen Huang | Jiashuai Zhang | Tong Zhang | Yifan Zhang | Wenqiang Lei
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
While recent research increasingly emphasizes the value of human-LLM collaboration in competitive programming and proposes numerous empirical methods, a comprehensive understanding remains elusive due to the fragmented nature of existing studies and their use of diverse, application-specific human feedback. Thus, our work serves a three-fold purpose: First, we present the first taxonomy of human feedback consolidating the entire programming process, which promotes fine-grained evaluation. Second, we introduce ELABORATIONSET, a novel programming dataset specifically designed for human-LLM collaboration, meticulously annotated to enable large-scale simulated human feedback and facilitate cost-effective real human interaction studies. Third, we introduce ELABORATION, a novel benchmark to facilitate a thorough assessment of human-LLM competitive programming. With ELABORATION, we pinpoint strengths and weaknesses of existing methods, thereby setting the foundation for future improvement. Our dataset and code will be openly released.
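As a rough illustration of the kind of collaboration loop such a benchmark evaluates, consider the following minimal sketch of an LLM drafting a solution and revising it against simulated human feedback. The function names, the `generate`/`refine`/`simulate_feedback` callables, and the test-running logic are assumptions for illustration, not the paper's actual interfaces.

```python
# Hypothetical sketch of a human-LLM competitive-programming loop in which
# feedback is simulated from annotated data. All interfaces here are assumptions.
from dataclasses import dataclass
from typing import Callable, List, Tuple
import subprocess
import sys

@dataclass
class Problem:
    statement: str
    tests: List[Tuple[str, str]]  # (stdin, expected stdout) pairs

def run_tests(code: str, tests: List[Tuple[str, str]]) -> List[int]:
    """Run candidate Python code on each test; return indices of failing tests."""
    failures = []
    for i, (stdin, expected) in enumerate(tests):
        result = subprocess.run([sys.executable, "-c", code], input=stdin,
                                capture_output=True, text=True, timeout=5)
        if result.stdout.strip() != expected.strip():
            failures.append(i)
    return failures

def collaborate(problem: Problem,
                generate: Callable[[str], str],
                refine: Callable[[str, str, str], str],
                simulate_feedback: Callable[[Problem, str, List[int]], str],
                max_rounds: int = 3) -> str:
    """Draft a solution, then iteratively revise it using (simulated) human feedback."""
    code = generate(problem.statement)
    for _ in range(max_rounds):
        failures = run_tests(code, problem.tests)
        if not failures:
            break
        # Feedback would be drawn from an annotated feedback-taxonomy category,
        # e.g. a hint about the intended algorithm or a pointer to a failing test.
        feedback = simulate_feedback(problem, code, failures)
        code = refine(problem.statement, code, feedback)
    return code
```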
Physics Reasoner: Knowledge-Augmented Reasoning for Solving Physics Problems with Large Language Models
Xinyu Pang | Ruixin Hong | Zhanke Zhou | Fangrui Lv | Xinwei Yang | Zhilong Liang | Bo Han | Changshui Zhang
Proceedings of the 31st International Conference on Computational Linguistics
Physics problems constitute a significant aspect of reasoning, demanding sophisticated reasoning ability and abundant physics knowledge. However, existing large language models (LLMs) frequently fail due to a lack of knowledge or incorrect knowledge application. To mitigate these issues, we propose Physics Reasoner, a knowledge-augmented framework to solve physics problems with LLMs. Specifically, the proposed framework constructs a comprehensive formula set to provide explicit physics knowledge and utilizes checklists containing detailed instructions to guide effective knowledge application. Concretely, given a physics problem, Physics Reasoner solves it through three stages: problem analysis, formula retrieval, and guided reasoning. During the process, checklists are employed to enhance LLMs’ self-improvement in the analysis and reasoning stages. Empirically, Physics Reasoner mitigates the issues of insufficient knowledge and incorrect application, achieving state-of-the-art performance on SciBench with an average accuracy improvement of 5.8%.
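A rough sketch of such a three-stage pipeline might look as follows; the formula set, checklist wording, prompt templates, and the `llm` callable are illustrative assumptions rather than the paper's implementation.

```python
# Hypothetical sketch of a Physics Reasoner-style pipeline: problem analysis,
# formula retrieval from an explicit formula set, then checklist-guided reasoning.
from typing import Callable, Dict, List

FORMULA_SET: Dict[str, str] = {
    "kinematics": "v = v0 + a*t; x = x0 + v0*t + 0.5*a*t**2",
    "ideal gas":  "P*V = n*R*T",
    "newton":     "F = m*a",
}

ANALYSIS_CHECKLIST = [
    "List all given quantities with units.",
    "State the quantity to be solved for.",
    "Identify the physical sub-field involved.",
]

REASONING_CHECKLIST = [
    "Substitute numbers only after the symbolic expression is complete.",
    "Check unit consistency of the final expression.",
    "Verify the numeric answer has a sensible magnitude.",
]

def retrieve_formulas(analysis: str) -> List[str]:
    """Keyword match against the formula set (a simple stand-in for retrieval)."""
    return [f for topic, f in FORMULA_SET.items() if topic in analysis.lower()]

def physics_reason(problem: str, llm: Callable[[str], str]) -> str:
    # Stage 1: problem analysis, self-checked against the analysis checklist.
    analysis = llm("Analyse this physics problem.\n" + problem
                   + "\nChecklist:\n- " + "\n- ".join(ANALYSIS_CHECKLIST))
    # Stage 2: retrieve candidate formulas as explicit physics knowledge.
    formulas = retrieve_formulas(analysis)
    # Stage 3: checklist-guided reasoning with the retrieved formulas.
    return llm("Solve step by step using only these formulas:\n"
               + "\n".join(formulas)
               + "\nProblem: " + problem
               + "\nChecklist:\n- " + "\n- ".join(REASONING_CHECKLIST))
```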
CANDY: Benchmarking LLMs’ Limitations and Assistive Potential in Chinese Misinformation Fact-Checking
Ruiling Guo | Xinwei Yang | Chen Huang | Tong Zhang | Yong Hu
Findings of the Association for Computational Linguistics: EMNLP 2025
The effectiveness of large language models (LLMs) in fact-checking misinformation remains uncertain, despite their growing use. To this end, we present CANDY, a benchmark designed to systematically evaluate the capabilities and limitations of LLMs in fact-checking Chinese misinformation. Specifically, we curate a carefully annotated dataset of ~20k instances. Our analysis shows that current LLMs exhibit limitations in generating accurate fact-checking conclusions, even when enhanced with chain-of-thought reasoning and few-shot prompting. To understand these limitations, we develop a taxonomy to categorize flawed LLM-generated explanations for their conclusions and identify factual fabrication as the most common failure mode. Although LLMs alone are unreliable for fact-checking, our findings indicate their considerable potential to augment human performance when deployed as assistive tools in fact-checking scenarios. Our dataset and code can be accessed at https://github.com/SCUNLP/CANDY.
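The evaluation setup this implies can be sketched roughly as below: prompt an LLM for a verdict (optionally with few-shot examples and chain-of-thought), score it against gold labels, and keep the explanations for taxonomy-based error analysis. The prompt wording, verdict parsing, and `llm` callable are assumptions for illustration, not the benchmark's actual protocol.

```python
# Minimal sketch of a CANDY-style fact-checking evaluation loop.
# Prompt templates and the llm callable are illustrative assumptions.
from typing import Callable, Iterable, Tuple

FEW_SHOT = (
    "Claim: Drinking hot water cures the flu.\n"
    "Reasoning: No clinical evidence supports this; influenza is viral.\n"
    "Verdict: false\n\n"
)

def check_claim(claim: str, llm: Callable[[str], str], cot: bool = True) -> Tuple[str, str]:
    """Ask the model for a verdict; return (verdict, raw response) for error analysis."""
    prompt = FEW_SHOT + "Claim: " + claim + "\n"
    prompt += "Reasoning:" if cot else "Verdict:"
    response = llm(prompt)
    verdict = "false" if "false" in response.lower() else "true"  # crude parsing for the sketch
    return verdict, response

def evaluate(dataset: Iterable[Tuple[str, str]], llm: Callable[[str], str]) -> float:
    """Accuracy of model verdicts against gold labels ('true' / 'false')."""
    correct, total = 0, 0
    for claim, gold in dataset:
        verdict, _ = check_claim(claim, llm)
        correct += int(verdict == gold)
        total += 1
    return correct / max(total, 1)
```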