ELABORATION: A Comprehensive Benchmark on Human-LLM Competitive Programming
Xinwei Yang | Zhaofeng Liu | Chen Huang | Jiashuai Zhang | Tong Zhang | Yifan Zhang | Wenqiang Lei
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025
While recent research increasingly emphasizes the value of human-LLM collaboration in competitive programming and proposes numerous empirical methods, a comprehensive understanding remains elusive due to the fragmented nature of existing studies and their use of diverse, application-specific human feedback. Our work therefore serves a three-fold purpose: First, we present the first taxonomy of human feedback covering the entire programming process, which enables fine-grained evaluation. Second, we introduce ELABORATIONSET, a novel programming dataset specifically designed for human-LLM collaboration, meticulously annotated to enable large-scale simulated human feedback and to facilitate cost-effective real human interaction studies. Third, we introduce ELABORATION, a novel benchmark for a thorough assessment of human-LLM competitive programming. With ELABORATION, we pinpoint the strengths and weaknesses of existing methods, thereby laying the foundation for future improvement. Our dataset and code will be openly released.