Lin Li
2025
RED: Unleashing Token-Level Rewards from Holistic Feedback via Reward Redistribution
Jiahui Li | Lin Li | Tai-Wei Chang | Kun Kuang | Long Chen | Jun Zhou | Cheng Yang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Reinforcement learning from human feedback (RLHF) offers a promising approach to aligning large language models (LLMs) with human preferences. Typically, a reward model is trained or supplied to act as a proxy for humans in evaluating generated responses during the reinforcement training phase. However, current reward models operate as sequence-to-one models, allocating a single, sparse, and delayed reward to an entire output sequence. This approach may overlook the significant contributions of individual tokens toward the desired outcome. To address this, we propose a more fine-grained, token-level guidance approach for RL training. Specifically, we introduce RED, a novel REward reDistribution method that evaluates and assigns specific credit to each token using an off-the-shelf reward model. Utilizing these fine-grained rewards enhances the model’s understanding of language nuances, leading to more precise performance improvements. Notably, our method does not require modifying the reward model or introducing additional training steps, thereby incurring minimal computational costs. Experimental results across diverse datasets and tasks demonstrate the superiority of our approach.
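The abstract does not spell out RED's redistribution rule, so the following is only a minimal sketch of the general idea under an assumed prefix-difference scheme: each response token is credited with the change in the sequence-level score when it is appended. The names `redistribute_reward`, `score_fn`, and the toy scorer are hypothetical, chosen for illustration rather than taken from the paper.

```python
import torch

def redistribute_reward(score_fn, prompt_ids, response_ids):
    """Split a holistic sequence score into per-token rewards by scoring
    successive prefixes and taking first differences. The token rewards
    telescope back to score(full sequence) - score(prompt)."""
    rewards = []
    prev = score_fn(prompt_ids)
    for t in range(1, response_ids.numel() + 1):
        cur = score_fn(torch.cat([prompt_ids, response_ids[:t]]))
        rewards.append(cur - prev)  # marginal contribution of token t
        prev = cur
    return torch.stack(rewards)

# Toy stand-in for an off-the-shelf reward model: scores a sequence by
# the (arbitrary) sum of its token ids, just to make the sketch runnable.
toy_score = lambda ids: ids.float().sum()
prompt = torch.tensor([1, 2, 3])
response = torch.tensor([4, 5, 6])
per_token = redistribute_reward(toy_score, prompt, response)
print(per_token)  # tensor([4., 5., 6.]) -- sums to the holistic gain
```

Because the per-token rewards sum to the full-sequence score (minus the prompt baseline), a scheme like this keeps the overall learning signal intact while giving the RL update dense, token-level credit, which is the property the abstract emphasizes.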