Yonghao Liu
2025
Structural Reward Model: Enhancing Interpretability, Efficiency, and Scalability in Reward Modeling
Xiaoyu Liu | Di Liang | Hongyu Shan | Peiyang Liu | Yonghao Liu | Muling Wu | Yuntao Li | Xianjie Wu | Li Miao | Jiangrong Shen | Minlong Peng
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Reward Models (RMs) are key components for evaluating and guiding language model outputs. However, traditional scalar RMs often struggle with incorporating contextual and background information during inference, leading to incomplete evaluations. Generative RMs (GRMs) attempt to address these limitations by generating intermediate reasoning steps. Yet, their uncontrolled black-box nature and inefficiency due to sequential decoding hinder their industrial deployment. Industrial scenarios, such as search and recommendation systems, often involve single-domain tasks requiring evaluation along specific dimensions. In such contexts, diagnosing “bad cases” necessitates structured feedback to identify and optimize dimension-specific issues. In this paper, we propose the Structural Reward Model (SRM), a modular and interpretable framework integrating side-branch models as auxiliary feature generators. By introducing fine-grained dimensions, SRMs enable interpretable and efficient evaluation, facilitating targeted diagnostics and optimization. This structured approach ensures adaptability and scalability for industrial applications. Through comprehensive experiments, we demonstrate that SRMs outperform scalar RMs and GRMs in robustness and alignment with human preferences. The modular design further supports efficient optimization for practical scenarios, allowing SRM to provide a practical reward modeling solution for industry.
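A minimal sketch of the dimension-wise scoring idea the abstract describes, not the paper's implementation: each side-branch model evaluates one fine-grained dimension and the structured breakdown is kept for diagnostics. The class names, dimension names, and weighted aggregation below are illustrative assumptions.

```python
# Hypothetical sketch of a structural reward model with dimension-wise scoring.
# Names, dimensions, and the weighted-sum aggregation are assumptions, not the
# authors' published design.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class DimensionScore:
    name: str        # e.g. "relevance", "safety" (illustrative dimensions)
    score: float     # scalar score for this dimension, e.g. in [0, 1]
    rationale: str   # structured feedback used to diagnose "bad cases"


class StructuralRewardModel:
    def __init__(self,
                 branches: Dict[str, Callable[[str, str], DimensionScore]],
                 weights: Dict[str, float]):
        # Each side-branch model scores one dimension of (prompt, response).
        self.branches = branches
        self.weights = weights

    def score(self, prompt: str, response: str) -> Dict[str, object]:
        per_dim = {name: branch(prompt, response)
                   for name, branch in self.branches.items()}
        # Aggregate dimension scores into a single reward; the per-dimension
        # breakdown stays available for targeted optimization.
        reward = sum(self.weights[n] * s.score for n, s in per_dim.items())
        return {"reward": reward, "dimensions": per_dim}
```

In such a layout, a regression on a specific dimension can be traced to its branch and optimized in isolation, which is the modularity the abstract emphasizes.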
2021
Deep Attention Diffusion Graph Neural Networks for Text Classification
Yonghao Liu | Renchu Guan | Fausto Giunchiglia | Yanchun Liang | Xiaoyue Feng
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Text classification is a fundamental task with broad applications in natural language processing. Recently, graph neural networks (GNNs) have attracted much attention due to their powerful representation ability. However, most existing GNN-based methods for text classification consider only one-hop neighborhoods and low-frequency information within texts, which cannot fully exploit the rich contextual information of documents. Moreover, these models suffer from over-smoothing when many graph layers are stacked. In this paper, a Deep Attention Diffusion Graph Neural Network (DADGNN) model is proposed to learn text representations, bridging the interaction gap between a word and its distant neighbors. Experimental results on various standard benchmark datasets demonstrate the superior performance of the proposed approach.
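A minimal sketch of the attention-diffusion idea the abstract points to, letting a word node aggregate information from multi-hop neighbors without stacking many layers. The personalized-PageRank-style decay, the hop count `K`, and the factor `alpha` are assumptions for illustration, not DADGNN's exact formulation.

```python
# Hypothetical attention-diffusion sketch over a word graph; the decay scheme
# and hyperparameters are assumptions, not the DADGNN implementation.
import torch
import torch.nn.functional as F


def attention_diffusion(node_feats: torch.Tensor, adj: torch.Tensor,
                        K: int = 4, alpha: float = 0.2) -> torch.Tensor:
    """node_feats: [N, d] word-node features; adj: [N, N] 0/1 adjacency."""
    n = adj.size(0)
    adj = adj + torch.eye(n)  # self-loops so every row has at least one edge

    # One-hop scaled dot-product attention, masked to existing edges.
    scores = node_feats @ node_feats.t() / node_feats.size(-1) ** 0.5
    scores = scores.masked_fill(adj == 0, float("-inf"))
    att = F.softmax(scores, dim=-1)  # [N, N] one-hop attention

    # Diffuse attention over K hops with geometric decay:
    # sum_{k=0..K} alpha * (1 - alpha)^k * att^k  (truncated series).
    diffused = alpha * torch.eye(n)
    power = torch.eye(n)
    for k in range(1, K + 1):
        power = power @ att
        diffused = diffused + alpha * (1 - alpha) ** k * power

    # Multi-hop aggregation in a single propagation step, which is what lets
    # distant words interact without stacking many (over-smoothing) layers.
    return diffused @ node_feats
```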