Jianxiang Yu
2025
Can Large Language Models Act as Ensembler for Multi-GNNs?
Hanqi Duan | Yao Cheng | Jianxiang Yu | Yao Liu | Xiang Li
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Graph Neural Networks (GNNs) have emerged as powerful models for learning from graph-structured data. However, GNNs lack the inherent ability to understand the semantics of rich textual node attributes, which limits their effectiveness in applications. On the other hand, we empirically observe that among existing GNN models, no single one consistently outperforms the others across diverse datasets. In this paper, we study whether LLMs can act as an ensembler for multi-GNNs and propose the LensGNN model. The model first aligns multiple GNNs, mapping the representations of different GNNs into the same space. Then, through LoRA fine-tuning, it aligns the space between the GNNs and the LLM, injecting graph tokens and textual information into the LLM. This allows LensGNN to ensemble multiple GNNs and take advantage of the strengths of the LLM, leading to a deeper understanding of both textual semantic information and graph structural information. Experimental results show that LensGNN outperforms existing models. This research advances text-attributed graph ensemble learning by providing a robust and superior solution for integrating semantic and structural information. We provide our code and data here: https://github.com/AquariusAQ/LensGNN.
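The core step described in this abstract is mapping the node embeddings of several heterogeneous GNNs into one shared space so they can be fed to an LLM as graph tokens. Below is a minimal PyTorch sketch of that alignment step; the class and argument names are hypothetical stand-ins, not the official LensGNN API (see the linked repository for the actual implementation).

```python
import torch
import torch.nn as nn

class MultiGNNAligner(nn.Module):
    """Projects the embeddings of several pre-trained GNNs into a shared
    space so they can be ensembled as graph tokens for an LLM.
    (Hypothetical module name; see the official repo for the real code.)"""

    def __init__(self, gnn_dims, llm_dim):
        super().__init__()
        # One linear projector per GNN maps its embedding into the LLM space.
        self.projectors = nn.ModuleList(nn.Linear(d, llm_dim) for d in gnn_dims)

    def forward(self, gnn_embeddings):
        # gnn_embeddings: list of [num_nodes, d_i] tensors, one per GNN.
        graph_tokens = [proj(z) for proj, z in zip(self.projectors, gnn_embeddings)]
        # Stack into [num_nodes, num_gnns, llm_dim]: a short sequence of
        # graph tokens per node that can be prepended to its text tokens.
        return torch.stack(graph_tokens, dim=1)

# Usage: embeddings from, e.g., a 64-d, a 128-d, and a 256-d GNN.
aligner = MultiGNNAligner(gnn_dims=[64, 128, 256], llm_dim=4096)
zs = [torch.randn(10, 64), torch.randn(10, 128), torch.randn(10, 256)]
tokens = aligner(zs)  # [10, 3, 4096], ready to concatenate with text embeddings
```

In the paper's setup, the projectors and the LLM's LoRA adapters would be trained jointly so that the graph-token space aligns with the LLM's input space; the sketch shows only the projection itself.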
2024
Automated Peer Reviewing in Paper SEA: Standardization, Evaluation, and Analysis
Jianxiang Yu | Zichen Ding | Jiaqi Tan | Kangyang Luo | Zhenmin Weng | Chenghua Gong | Long Zeng | RenJing Cui | Chengcheng Han | Qiushi Sun | Zhiyong Wu | Yunshi Lan | Xiang Li
Findings of the Association for Computational Linguistics: EMNLP 2024
In recent years, the rapid increase in scientific papers has overwhelmed traditional review mechanisms, resulting in publications of varying quality. Although existing methods have explored the capabilities of Large Language Models (LLMs) for automated scientific reviewing, their generated contents are often generic or partial. To address these issues, we introduce SEA, an automated paper reviewing framework. It comprises three modules: Standardization, Evaluation, and Analysis, represented by the models SEA-S, SEA-E, and SEA-A, respectively. First, SEA-S distills the data standardization capability of GPT-4 to integrate multiple reviews of a paper. Then, SEA-E uses the standardized data for fine-tuning, enabling it to generate constructive reviews. Finally, SEA-A introduces a new evaluation metric, the mismatch score, to assess the consistency between paper contents and reviews. Moreover, we design a self-correction strategy to enhance this consistency. Extensive experimental results on datasets collected from eight venues show that SEA can generate valuable insights for authors to improve their papers.
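A minimal, self-contained sketch of the self-correction loop around the mismatch score follows. The function names are hypothetical, and the toy score (1 minus cosine similarity of word-count vectors) only illustrates the idea of scoring paper-review consistency; SEA-A's actual mismatch score is a learned metric.

```python
from collections import Counter
from math import sqrt

def toy_mismatch_score(paper: str, review: str) -> float:
    """Toy consistency check between a paper and its review: higher values
    mean the review drifts further from the paper. A stand-in for SEA-A's
    learned mismatch score, for illustration only."""
    p, r = Counter(paper.lower().split()), Counter(review.lower().split())
    dot = sum(p[w] * r[w] for w in p.keys() & r.keys())
    norm = sqrt(sum(c * c for c in p.values())) * sqrt(sum(c * c for c in r.values()))
    return 1.0 - (dot / norm if norm else 0.0)

def self_correct(paper: str, review: str, regenerate, threshold=0.5, max_rounds=3):
    """Self-correction loop: while the mismatch is too high, regenerate the
    review via the caller-supplied `regenerate` callable (standing in for
    the fine-tuned SEA-E reviewer)."""
    for _ in range(max_rounds):
        if toy_mismatch_score(paper, review) <= threshold:
            break
        review = regenerate(paper, review)
    return review
```

The loop mirrors the strategy described in the abstract: the Analysis model scores a drafted review against the paper, and drafts that score poorly are revised until they fall below a consistency threshold or a round limit is reached.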