ProjectEval: A Benchmark for Programming Agents Automated Evaluation on Project-Level Code Generation

Kaiyuan Liu, Youcheng Pan, Yang Xiang, Daojing He, Jing Li, Yexing Du, Tianrun Gao


Abstract
LLM agents have recently made rapid progress in their programming capabilities. However, existing benchmarks can neither automatically evaluate generated code from the user's perspective nor explain the results of LLM agents' code generation. We therefore introduce ProjectEval, a new benchmark for the automated evaluation of LLM agents on project-level code generation through simulated user interaction. ProjectEval is constructed by an LLM and verified by human reviewers, and it provides inputs at three levels of detail, ranging from natural-language descriptions to code skeletons. Generated projects are evaluated both by executing them under simulated user interaction and by measuring code similarity with existing objective indicators. Through ProjectEval, we find that systematic engineering of project code, an overall understanding of the project, and comprehensive analysis capability are key for LLM agents to complete practical projects. Our findings and benchmark provide valuable insights for developing more effective programming agents that can be deployed in real-world production.
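The abstract notes that ProjectEval scores generated projects both by executing them under simulated user interaction and by code similarity against existing objective indicators. The sketch below is only an illustration of what a file-level similarity score over a generated project might look like; it uses Python's standard difflib rather than the benchmark's actual indicators, and all function names, the file-dictionary format, and the averaging scheme are assumptions made for illustration.

```python
# Illustrative sketch only: ProjectEval's real indicators and project format
# are defined in the paper. This merely demonstrates a simple similarity
# score between a reference project and a generated one.
from difflib import SequenceMatcher


def file_similarity(reference: str, generated: str) -> float:
    """Character-level similarity ratio between two source files (0.0 to 1.0)."""
    return SequenceMatcher(None, reference, generated).ratio()


def project_similarity(reference_files: dict[str, str],
                       generated_files: dict[str, str]) -> float:
    """Average similarity over the reference project's files.

    Files missing from the generated project contribute a score of 0.0.
    """
    scores = [
        file_similarity(ref_src, generated_files.get(path, ""))
        for path, ref_src in reference_files.items()
    ]
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    reference = {"app.py": "def add(a, b):\n    return a + b\n"}
    generated = {"app.py": "def add(x, y):\n    return x + y\n"}
    print(f"project similarity: {project_similarity(reference, generated):.2f}")
```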
Anthology ID: 2025.findings-acl.1036
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues: Findings | WS
Publisher: Association for Computational Linguistics
Pages: 20205–20221
URL: https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.1036/
Cite (ACL): Kaiyuan Liu, Youcheng Pan, Yang Xiang, Daojing He, Jing Li, Yexing Du, and Tianrun Gao. 2025. ProjectEval: A Benchmark for Programming Agents Automated Evaluation on Project-Level Code Generation. In Findings of the Association for Computational Linguistics: ACL 2025, pages 20205–20221, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): ProjectEval: A Benchmark for Programming Agents Automated Evaluation on Project-Level Code Generation (Liu et al., Findings 2025)
PDF: https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.1036.pdf