Shannan Yan


2025

LAGCL4Rec: When LLMs Activate Interactions Potential in Graph Contrastive Learning for Recommendation
Leqi Zheng | Chaokun Wang | Canzhi Chen | Jiajun Zhang | Cheng Wu | Zixin Song | Shannan Yan | Ziyang Liu | Hongwei Li
Findings of the Association for Computational Linguistics: EMNLP 2025

A core barrier preventing recommender systems from reaching their full potential lies in the inherent limitations of user-item interaction data and how existing methods handle it: (1) user-item interactions are sparse, making it difficult to learn reliable user preferences; (2) traditional contrastive learning methods often treat negative samples as equally hard or easy, ignoring informative semantic difficulty during training; and (3) modern LLM-based recommender systems, on the other hand, discard all negative feedback, leading to unbalanced preference modeling. To address these issues, we propose LAGCL4Rec, a framework that leverages Large Language Models to Activate interactions in Graph Contrastive Learning for Recommendation. Our approach operates in three stages: (i) Data-Level: augmenting sparse interactions with balanced positive and negative samples using LLM-enriched profiles; (ii) Rank-Level: assessing the semantic difficulty of negative samples through LLM-based grouping for fine-grained contrastive learning; and (iii) Rerank-Level: reasoning over augmented historical interactions to produce personalized recommendations. Theoretical analysis proves that LAGCL4Rec achieves effective information utilization with minimal computational overhead. Experiments across multiple benchmarks confirm that our method consistently outperforms state-of-the-art baselines.
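
To make the Rank-Level idea concrete, the sketch below shows one way LLM-assigned difficulty groups could enter a contrastive objective: each negative carries a weight tied to its difficulty group, so harder negatives contribute more to an InfoNCE-style loss. This is an illustrative assumption, not the paper's actual implementation; the function name, the group weights, and the tensor shapes are all hypothetical.

    import torch
    import torch.nn.functional as F

    def difficulty_weighted_infonce(user_emb, pos_emb, neg_emb, neg_groups,
                                    group_weights, temperature=0.2):
        """Illustrative InfoNCE-style loss with difficulty-grouped negatives.

        user_emb:      (B, d) user embeddings
        pos_emb:       (B, d) embeddings of positive items
        neg_emb:       (B, K, d) embeddings of K sampled negatives per user
        neg_groups:    (B, K) difficulty group index per negative (e.g. 0=easy, 2=hard)
        group_weights: (G,) weight assigned to each difficulty group
        """
        user_emb = F.normalize(user_emb, dim=-1)
        pos_emb = F.normalize(pos_emb, dim=-1)
        neg_emb = F.normalize(neg_emb, dim=-1)

        pos_logit = (user_emb * pos_emb).sum(-1, keepdim=True) / temperature      # (B, 1)
        neg_logits = torch.einsum("bd,bkd->bk", user_emb, neg_emb) / temperature  # (B, K)

        # Scale each negative's contribution by its difficulty-group weight.
        weights = group_weights[neg_groups]                                        # (B, K)
        neg_term = torch.logsumexp(neg_logits + weights.log(), dim=-1, keepdim=True)

        # Maximize the positive's probability mass against the weighted negatives.
        logits = torch.cat([pos_logit, neg_term], dim=-1)
        return -F.log_softmax(logits, dim=-1)[:, 0].mean()

    if __name__ == "__main__":
        B, K, d, G = 4, 8, 16, 3
        torch.manual_seed(0)
        loss = difficulty_weighted_infonce(
            torch.randn(B, d), torch.randn(B, d), torch.randn(B, K, d),
            torch.randint(0, G, (B, K)),
            torch.tensor([0.5, 1.0, 2.0]),  # easy negatives down-weighted, hard ones emphasized
        )
        print(loss.item())

In this reading, the LLM's role at the Rank-Level is only to produce the group assignments (neg_groups); how those groups are obtained and how the weights are set would follow the paper's own procedure.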