Ignore the KL Penalty! Boosting Exploration on Critical Tokens to Enhance RL Fine-Tuning

Jean Vassoyan, Nathanaël Beau, Roman Plaud


Abstract
The ability to achieve long-term goals is a key challenge in the current development of large language models (LLMs). To address this, pre-trained LLMs can be fine-tuned with reinforcement learning (RL) to explore solutions that optimize a given goal. However, exploration with LLMs is difficult, as a balance has to be struck between discovering new solutions and staying close enough to the pre-trained model, so as not to degrade basic capabilities. This is typically controlled with a Kullback-Leibler (KL) penalty. In this paper, we investigate the exploration dynamics of a small language model on a simple arithmetic task. We show how varying degrees of pre-training influence exploration and demonstrate the importance of “critical tokens” which have a dramatic impact on the final outcome. Consequently, we introduce a simple modification to the KL penalty that favors exploration on critical tokens, increasing the efficiency of the RL fine-tuning stage.
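The abstract's core mechanism can be illustrated with a minimal sketch: in PPO-style RL fine-tuning, the per-token reward is typically shaped by subtracting a KL penalty toward the pre-trained reference model, and the paper's modification suppresses that penalty on critical tokens. The sketch below is not the authors' implementation; the function name, the sample-based per-token KL estimate, and the boolean `critical_mask` input (the paper defines critical tokens by their impact on the final outcome, but the exact detection criterion is not given in the abstract) are illustrative assumptions.

```python
import numpy as np

def masked_kl_penalty(logp_policy, logp_ref, critical_mask, beta=0.1):
    """Per-token KL penalty that is suppressed on "critical tokens".

    logp_policy   : (T,) log-probs of the sampled tokens under the fine-tuned policy
    logp_ref      : (T,) log-probs of the same tokens under the pre-trained model
    critical_mask : (T,) boolean, True where a token is deemed critical (assumed
                    to be computed elsewhere; the criterion is hypothetical here)
    beta          : KL coefficient

    Standard reward shaping subtracts beta * (logp_policy - logp_ref) at each
    token; zeroing that term on critical tokens lets the policy explore freely
    there while staying anchored to the pre-trained model everywhere else.
    """
    per_token_kl = logp_policy - logp_ref            # sample-based KL estimate
    per_token_kl = np.where(critical_mask, 0.0, per_token_kl)
    return beta * per_token_kl

# Toy usage: penalties vanish exactly at the masked (critical) positions.
rng = np.random.default_rng(0)
T = 6
logp_policy = rng.normal(-1.0, 0.3, T)
logp_ref = rng.normal(-1.2, 0.3, T)
critical = np.array([False, False, True, False, True, False])
print(masked_kl_penalty(logp_policy, logp_ref, critical))
```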
Anthology ID:
2025.findings-naacl.340
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6108–6118
URL:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.findings-naacl.340/
Cite (ACL):
Jean Vassoyan, Nathanaël Beau, and Roman Plaud. 2025. Ignore the KL Penalty! Boosting Exploration on Critical Tokens to Enhance RL Fine-Tuning. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 6108–6118, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Ignore the KL Penalty! Boosting Exploration on Critical Tokens to Enhance RL Fine-Tuning (Vassoyan et al., Findings 2025)
PDF:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.findings-naacl.340.pdf