RED: Unleashing Token-Level Rewards from Holistic Feedback via Reward Redistribution

Jiahui Li, Lin Li, Tai-Wei Chang, Kun Kuang, Long Chen, Jun Zhou, Cheng Yang


Abstract
Reinforcement learning from human feedback (RLHF) offers a promising approach to aligning large language models (LLMs) with human preferences. Typically, a reward model is trained or supplied to act as a proxy for humans in evaluating generated responses during the reinforcement training phase. However, current reward models operate as sequence-to-one models, allocating a single, sparse, and delayed reward to an entire output sequence. This approach may overlook the significant contributions of individual tokens toward the desired outcome. To this end, we propose a more fine-grained, token-level guidance approach for RL training. Specifically, we introduce RED, a novel REward reDistribution method that evaluates and assigns specific credit to each token using an off-the-shelf reward model. Utilizing these fine-grained rewards enhances the model’s understanding of language nuances, leading to more precise performance improvements. Notably, our method does not require modifying the reward model or introducing additional training steps, thereby incurring minimal computational costs. Experimental results across diverse datasets and tasks demonstrate the superiority of our approach.
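
The abstract describes redistributing a single sequence-level reward into per-token credits but does not spell out the redistribution rule. The minimal Python sketch below illustrates one hypothetical instantiation: score each growing prefix of the response with an off-the-shelf sequence-to-one reward model and treat each token's marginal change in score as its credit. The function names (redistribute_reward, toy_reward_model) and the prefix-difference rule are assumptions for illustration, not the paper's actual formulation.

# Illustrative sketch only: a hypothetical way to split a holistic reward into
# per-token credits using a sequence-to-one reward model. The paper's exact
# redistribution rule may differ.

from typing import Callable, List

def redistribute_reward(
    prompt: str,
    response_tokens: List[str],
    reward_model: Callable[[str, str], float],  # sequence-to-one reward model
) -> List[float]:
    """Split a sequence-level reward into per-token credits.

    Each token's credit is the change in the reward model's score when that
    token is appended to the partial response, so the credits telescope to the
    full-response reward relative to the empty-response baseline.
    """
    credits = []
    prev_score = reward_model(prompt, "")  # score of the empty response
    partial = ""
    for tok in response_tokens:
        partial += tok
        score = reward_model(prompt, partial)
        credits.append(score - prev_score)  # marginal contribution of this token
        prev_score = score
    return credits

if __name__ == "__main__":
    # Toy stand-in for an off-the-shelf reward model: rewards polite responses.
    def toy_reward_model(prompt: str, response: str) -> float:
        return 1.0 if "thank" in response.lower() else 0.0

    tokens = ["Sure", ",", " thank", " you", "!"]
    print(redistribute_reward("Say something nice.", tokens, toy_reward_model))
    # -> [0.0, 0.0, 1.0, 0.0, 0.0]; the credits sum to the full-response reward
    #    relative to the empty-response baseline.
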
Anthology ID:
2025.emnlp-main.252
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4993–5022
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.252/
Cite (ACL):
Jiahui Li, Lin Li, Tai-Wei Chang, Kun Kuang, Long Chen, Jun Zhou, and Cheng Yang. 2025. RED: Unleashing Token-Level Rewards from Holistic Feedback via Reward Redistribution. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 4993–5022, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
RED: Unleashing Token-Level Rewards from Holistic Feedback via Reward Redistribution (Li et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.252.pdf
Checklist:
2025.emnlp-main.252.checklist.pdf