SurveyGen: Quality-Aware Scientific Survey Generation with Large Language Models

Tong Bao, Mir Tafseer Nayeem, Davood Rafiei, Chengzhi Zhang


Abstract
Automatic survey generation has emerged as a key task in scientific document processing. While large language models (LLMs) have shown promise in generating survey texts, the lack of standardized evaluation datasets critically hampers rigorous assessment of their performance against human-written surveys. In this work, we present SurveyGen, a large-scale dataset comprising over 4,200 human-written surveys across diverse scientific domains, along with 242,143 cited references and extensive quality-related metadata for both the surveys and the cited papers. Leveraging this resource, we build QUAL-SG, a novel quality-aware framework for survey generation that enhances the standard Retrieval-Augmented Generation (RAG) pipeline by incorporating quality-aware indicators into literature retrieval to assess and select higher-quality source papers. Using this dataset and framework, we systematically evaluate state-of-the-art LLMs under varying levels of human involvement—from fully automatic generation to human-guided writing. Experimental results and human evaluations show that while semi-automatic pipelines can achieve partially competitive outcomes, fully automatic survey generation still suffers from low citation quality and limited critical analysis.
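The abstract describes QUAL-SG as augmenting a standard RAG pipeline with quality-aware indicators during literature retrieval. Below is a minimal, hypothetical sketch of what such a quality-aware re-ranking step could look like; the indicator names (citation_count, venue_tier), weights, and blending scheme are illustrative assumptions, not the paper's actual QUAL-SG scoring.

```python
# Hypothetical quality-aware re-ranking of retrieved candidate papers:
# blend the base retriever's relevance score with simple quality indicators
# (citation count, venue tier). All names and weights are assumptions.
import math
from dataclasses import dataclass


@dataclass
class Candidate:
    title: str
    relevance: float      # similarity score from the base retriever, in [0, 1]
    citation_count: int   # quality indicator: citations received
    venue_tier: float     # quality indicator: venue reputation, in [0, 1]


def quality_score(c: Candidate) -> float:
    # Log-scale citations so a few highly cited papers do not dominate.
    citation_signal = min(math.log1p(c.citation_count) / math.log1p(10_000), 1.0)
    return 0.5 * citation_signal + 0.5 * c.venue_tier


def rerank(candidates: list[Candidate], alpha: float = 0.7) -> list[Candidate]:
    # alpha controls how much retrieval relevance outweighs quality indicators.
    return sorted(
        candidates,
        key=lambda c: alpha * c.relevance + (1 - alpha) * quality_score(c),
        reverse=True,
    )


if __name__ == "__main__":
    pool = [
        Candidate("Survey A", relevance=0.91, citation_count=12, venue_tier=0.3),
        Candidate("Survey B", relevance=0.84, citation_count=2300, venue_tier=0.9),
    ]
    for c in rerank(pool):
        print(c.title)
```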
Anthology ID:
2025.emnlp-main.136
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2712–2736
URL:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.emnlp-main.136/
DOI:
10.18653/v1/2025.emnlp-main.136
Cite (ACL):
Tong Bao, Mir Tafseer Nayeem, Davood Rafiei, and Chengzhi Zhang. 2025. SurveyGen: Quality-Aware Scientific Survey Generation with Large Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 2712–2736, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
SurveyGen: Quality-Aware Scientific Survey Generation with Large Language Models (Bao et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.emnlp-main.136.pdf
Checklist:
2025.emnlp-main.136.checklist.pdf