LSRL: Process-Supervised GRPO on Latent Recurrent States Improves Mathematical Reasoning

Hangliang Ren


Abstract
Latent-recurrent language models solve tasks by iteratively refining hidden states rather than emitting chain-of-thought tokens, yet the opacity of those hidden trajectories hinders credit assignment and limits mathematical reasoning accuracy. We propose Latent-State Supervised Reinforcement Learning (LSRL), a process-supervised variant of Group Relative Policy Optimization (GRPO) that delivers dense rewards at every latent step. We decode each recurrent depth of a 3.5-billion-parameter Huginn model and score the partial solutions with a GPT-4.1-nano grader aligned to final-answer correctness. Using LoRA adapters, we update the policy on a single NVIDIA L40S GPU with only 500 GSM-8K training problems. Relative to the depth-8 supervised Huginn baseline, LSRL improves absolute accuracy by +4.27 points on GSM-8K and +2.06 points on MathQA. These results demonstrate that rewarding latent steps provides an efficient route to stronger mathematical reasoning in latent-recurrent language models.
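The core idea in the abstract — summing dense per-depth rewards along a latent trajectory and normalizing them against a group of rollouts for the same problem, GRPO-style — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and the flat list-of-lists reward layout are assumptions, and the real method additionally decodes each recurrent depth and queries a GPT-4.1-nano grader to produce the per-step rewards.

```python
from statistics import mean, pstdev

def latent_step_advantages(group_rewards):
    """GRPO-style group-normalized advantages from dense latent-step rewards.

    group_rewards: one entry per rollout of the same problem; each entry is
    the list of grader rewards assigned to that rollout's latent depths.
    Returns one scalar advantage per rollout, computed by summing each
    rollout's per-depth rewards and standardizing against the group.
    """
    returns = [sum(steps) for steps in group_rewards]
    mu = mean(returns)
    sigma = pstdev(returns) or 1.0  # guard against a zero-variance group
    return [(r - mu) / sigma for r in returns]
```

In full GRPO these advantages would weight the policy-gradient (here, LoRA-adapter) update for each rollout; the group baseline removes the need for a learned critic.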
Anthology ID:
2025.findings-emnlp.669
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12534–12545
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.669/
DOI:
10.18653/v1/2025.findings-emnlp.669
Bibkey:
Cite (ACL):
Hangliang Ren. 2025. LSRL: Process-Supervised GRPO on Latent Recurrent States Improves Mathematical Reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 12534–12545, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
LSRL: Process-Supervised GRPO on Latent Recurrent States Improves Mathematical Reasoning (Ren, Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.669.pdf
Checklist:
2025.findings-emnlp.669.checklist.pdf