Kaveh Hassani
2026
How to Make LMs Strong Node Classifiers?
Zhe Xu | Kaveh Hassani | Si Zhang | Hanqing Zeng | Michihiro Yasunaga | Limei Wang | Dongqi Fu | Ning Yao | Bo Long | Hanghang Tong
Findings of the Association for Computational Linguistics: EACL 2026
Language Models (LMs) are increasingly challenging the dominance of domain-specific models, such as Graph Neural Networks (GNNs) and Graph Transformers (GTs), in graph learning tasks. Following this trend, we propose a novel approach that empowers off-the-shelf LMs to achieve performance comparable to state-of-the-art (SOTA) GNNs on node classification tasks, without requiring any architectural modifications. By preserving the LM’s original architecture, our approach retains a key benefit of LM instruction tuning: the ability to jointly train on diverse datasets, fostering greater flexibility and efficiency. To achieve this, we introduce two key augmentation strategies: (1) enriching LMs’ input using topological and semantic retrieval methods, which provide richer contextual information, and (2) guiding the LMs’ classification process through a lightweight GNN classifier that effectively prunes class candidates. Our experiments on real-world datasets show that backbone Flan-T5 LMs equipped with these augmentation strategies outperform SOTA text-output node classifiers and are comparable to top-performing vector-output node classifiers. By bridging the gap between specialized node classifiers and general LMs, this work paves the way for more versatile and widely applicable graph learning models. We will open-source the code upon publication.
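The two augmentation strategies in the abstract can be sketched as follows: a lightweight GNN scores all classes, only the top-k candidates are kept, and the LM prompt is enriched with retrieved neighbor text plus the pruned label set. This is a minimal illustration with made-up logits and hypothetical function names, not the paper's actual implementation:

```python
def prune_candidates(gnn_logits, labels, k=3):
    """Keep the top-k classes by GNN confidence as candidates for the LM."""
    ranked = sorted(range(len(gnn_logits)), key=lambda i: gnn_logits[i], reverse=True)
    return [labels[i] for i in ranked[:k]]

def build_prompt(node_text, neighbor_texts, candidates):
    """Enrich the LM input with retrieved context and the pruned label set."""
    context = "\n".join(f"- {t}" for t in neighbor_texts)
    return (
        f"Node text: {node_text}\n"
        f"Related nodes (retrieved):\n{context}\n"
        f"Choose one label from: {', '.join(candidates)}"
    )

# Toy example: GNN scores over five classes (hypothetical values)
logits = [0.1, 2.3, 0.4, 1.7, -0.5]
labels = ["cs.AI", "cs.LG", "cs.CV", "cs.CL", "cs.DB"]
cands = prune_candidates(logits, labels, k=3)
prompt = build_prompt("Paper on graph learning", ["Cited: GNN survey"], cands)
```

Pruning shrinks the LM's decision space from all classes to a few plausible ones, which is why a text-output model can stay competitive with vector-output classifiers.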
Imbalanced Gradients in RL Post-Training of Multi-Task LLMs
Runzhe Wu | Ankur Samanta | Ayush Jain | Scott Fujimoto | Jeongyeol Kwon | Ben Kretzu | Youliang Yu | Kaveh Hassani | Boris Vidolov | Yonathan Efroni
Findings of the Association for Computational Linguistics: EACL 2026
Multi-task post-training of large language models (LLMs) is typically performed by mixing datasets from different tasks and optimizing them jointly. This approach implicitly assumes that all tasks contribute gradients of similar magnitudes. In this paper, we show that this assumption fails in RL post-training: certain tasks produce significantly larger gradients, which biases optimization toward those tasks. Such gradient imbalance would be justified only if larger gradients implied larger learning gains on the tasks (i.e., larger performance improvements), but we find this is not true. Large-gradient tasks can achieve similar or even much lower learning gains than small-gradient ones. Further analyses reveal that these gradient imbalances cannot be explained by typical training statistics such as training rewards or advantages, suggesting that they arise from the *inherent* differences between tasks. This cautions against naive dataset mixing and calls for future work on principled gradient-level corrections for LLMs.
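The core observation, that naive dataset mixing lets the large-gradient task dominate the shared update direction, can be illustrated with a toy calculation (the gradient vectors below are hypothetical, not the paper's measurements):

```python
import math

def l2_norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (l2_norm(u) * l2_norm(v))

# Per-task gradients on a shared parameter vector (toy values)
grad_task_a = [3.0, 4.0]   # large-gradient task, norm 5.0
grad_task_b = [0.3, -0.4]  # small-gradient task, norm 0.5

# Naive mixing: average the per-task gradients
mixed = [(a + b) / 2 for a, b in zip(grad_task_a, grad_task_b)]

# The mixed update is almost entirely aligned with the large-gradient
# task and points away from the small-gradient task's direction,
# even though larger gradients need not imply larger learning gains.
sim_a = cosine(mixed, grad_task_a)
sim_b = cosine(mixed, grad_task_b)
```

Here `sim_a` is close to 1 while `sim_b` is negative: task B's contribution is effectively erased by averaging, which is the imbalance the abstract warns about.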