Evaluating Large Language Models on Wikipedia-Style Survey Generation
Fan Gao | Hang Jiang | Rui Yang | Qingcheng Zeng | Jinghui Lu | Moritz Blum | Tianwei She | Yuang Jiang | Irene Li
Findings of the Association for Computational Linguistics: ACL 2024
Educational materials such as survey articles in specialized fields like computer science traditionally require tremendous expert input and are therefore expensive to create and keep up to date. Recently, Large Language Models (LLMs) have achieved significant success across various general tasks. However, their effectiveness and limitations in the education domain have yet to be fully explored. In this work, we examine the proficiency of LLMs in generating succinct survey articles specific to the niche field of NLP in computer science, focusing on a curated list of 99 topics. Automated benchmarks reveal that GPT-4 surpasses its predecessors, including GPT-3.5, PaLM2, and LLaMa2, by margins ranging from 2% to 20% when compared against the established ground truth. We compare both human and GPT-based evaluation scores and provide an in-depth analysis. While our findings suggest that GPT-created surveys are more contemporary and accessible than human-authored ones, certain limitations were observed. Notably, GPT-4, despite often delivering outstanding content, occasionally exhibited lapses such as missing details or factual errors. Finally, we compared the rating behavior of humans and GPT-4 and found systematic bias in GPT-based evaluation.