Fan Gao


2025

TDCSA: LLM-Guided Top-Down Approach for Robust Citation Sentiment Analysis
Fan Gao | Jieyang Peng | Xiaoming Tao | Wang Youzheng
Findings of the Association for Computational Linguistics: ACL 2025

Citation Sentiment Analysis (CSA) plays a crucial role in understanding academic influence and knowledge diffusion. While pre-trained language models (PLMs) and large language models (LLMs) have shown remarkable success in general sentiment analysis, they encounter specialized challenges in CSA due to the subtle and implicit sentiment expressions in academic writing, as well as complex sentiment transitions. To address these challenges, we propose TDCSA, a Top-Down framework that leverages LLMs' semantic understanding capabilities to enhance PLM-based CSA, transforming the traditional bottom-up feature engineering paradigm into a top-down architecture. Our framework consists of three key components: (1) a Dual LLM Feature Generation module for robust quadruple extraction, (2) a Multi-view Feature Representation mechanism for neutral citation processing, and (3) a Quad Feature Enhanced PLM. Experiments demonstrate that TDCSA significantly outperforms existing methods, achieving state-of-the-art performance while maintaining robustness to quadruple quality variations.

2024

Evaluating Large Language Models on Wikipedia-Style Survey Generation
Fan Gao | Hang Jiang | Rui Yang | Qingcheng Zeng | Jinghui Lu | Moritz Blum | Tianwei She | Yuang Jiang | Irene Li
Findings of the Association for Computational Linguistics: ACL 2024

Educational materials such as survey articles in specialized fields like computer science traditionally require tremendous expert input and are therefore expensive to create and update. Recently, Large Language Models (LLMs) have achieved significant success across various general tasks. However, their effectiveness and limitations in the education domain are yet to be fully explored. In this work, we examine the proficiency of LLMs in generating succinct survey articles specific to the niche field of NLP in computer science, focusing on a curated list of 99 topics. Automated benchmarks reveal that GPT-4 surpasses its predecessors, including GPT-3.5, PaLM2, and LLaMa2, by margins ranging from 2% to 20% in comparison to the established ground truth. We compare both human and GPT-based evaluation scores and provide in-depth analysis. While our findings suggest that GPT-created surveys are more contemporary and accessible than human-authored ones, certain limitations were observed. Notably, GPT-4, despite often delivering outstanding content, occasionally exhibited lapses such as missing details or factual errors. Finally, we compared the rating behavior between humans and GPT-4 and found systematic bias in using GPT evaluation.