Rewarding the Unlikely: Lifting GRPO Beyond Distribution Sharpening

Andre Wang He, Daniel Fried, Sean Welleck


Abstract
Reinforcement learning is emerging as a primary driver for improving language model reasoning capabilities. A fundamental question is whether current reinforcement learning algorithms—such as Group Relative Policy Optimization (GRPO), the de facto standard algorithm used to improve language model reasoning—merely sharpen the base model’s distribution around problems it can already solve. We investigate this question in the context of formal theorem proving, which has access to a perfect verifier. We identify a degenerate rank bias in GRPO in which highly probable trajectories are reinforced and rare ones are neglected. This results in distribution sharpening: the model can solve some problems with fewer samples, but underperforms simply sampling more solutions from the original model. To overcome GRPO’s rank bias, we introduce unlikeliness reward, a simple method for explicitly up-weighting rare but correct solutions. We show that unlikeliness reward mitigates rank bias and improves pass@N across a large range of N in both synthetic and real theorem proving settings. We also uncover an unexpected link between rank bias and a seemingly mundane hyperparameter—the number of updates per batch—that leads to a second, complementary mitigation. We combine our insights into a revised GRPO training recipe for formal theorem proving, yielding an open pipeline that achieves performance competitive with DeepSeek-Prover-V1.5-RL on the miniF2F-test benchmark.
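
To make the idea in the abstract concrete, the following is a minimal Python sketch of GRPO-style group-relative advantages combined with a bonus that up-weights rare but correct completions. The function names, the rank-based form of the bonus over sequence log-probabilities, and the beta coefficient are illustrative assumptions, not the paper's exact formulation.

# Illustrative sketch only: GRPO-style group advantages plus a hypothetical
# "unlikeliness" bonus for rare-but-correct completions. Names and the exact
# bonus form are assumptions for illustration, not the paper's method.
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Standard GRPO-style advantage: standardize rewards within one group."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

def unlikeliness_adjusted_rewards(rewards, logprobs, beta=0.5):
    """Add a bonus to correct completions in proportion to how improbable they
    are under the current policy, so rare correct solutions are not drowned
    out by highly probable ones (illustrative rank-based form)."""
    r = np.asarray(rewards, dtype=float)
    lp = np.asarray(logprobs, dtype=float)
    ranks = np.argsort(np.argsort(-lp))          # 0 = most likely completion
    bonus = beta * ranks / max(len(lp) - 1, 1)   # scaled to [0, beta]
    return r + bonus * (r > 0)                   # boost correct samples only

# Toy example: 4 sampled proofs for one problem, two of them correct.
rewards  = [1.0, 0.0, 1.0, 0.0]
logprobs = [-5.0, -7.0, -40.0, -30.0]            # sequence log-probabilities
print(grpo_advantages(unlikeliness_adjusted_rewards(rewards, logprobs)))

In this toy group, the second correct proof is far less probable under the policy than the first, so the rank-based bonus raises its advantage instead of letting the update reinforce only the high-probability solution.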
Anthology ID:
2025.emnlp-main.1298
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
25559–25571
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1298/
Cite (ACL):
Andre Wang He, Daniel Fried, and Sean Welleck. 2025. Rewarding the Unlikely: Lifting GRPO Beyond Distribution Sharpening. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 25559–25571, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Rewarding the Unlikely: Lifting GRPO Beyond Distribution Sharpening (He et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1298.pdf
Checklist:
 2025.emnlp-main.1298.checklist.pdf