EQA-RM: A Generative Embodied Reward Model with Test-time Scaling

Yuhang Chen, Zhen Tan, Tianlong Chen

Abstract
Reward Models (RMs), vital for large model alignment, are underexplored for complex embodied tasks such as Embodied Question Answering (EQA), where nuanced evaluation of agents' spatial, temporal, and logical understanding is critical yet not considered by generic approaches. We introduce EQA-RM, a novel generative multimodal reward model specifically architected for EQA, trained via our innovative Contrastive Group Relative Policy Optimization (C-GRPO) strategy to learn fine-grained behavioral distinctions. The generative nature of EQA-RM provides interpretable, structured reward feedback (beyond simple scalars), uniquely enabling test-time scaling to dynamically adjust evaluation granularity, from concise scores to detailed critiques of reasoning and grounding, at inference without retraining. Concurrently, we introduce EQARewardBench, a new benchmark built on OpenEQA for standardized assessment of EQA reward models. Demonstrating high sample efficiency, EQA-RM (fine-tuned from Qwen2-VL-2B-Instruct) achieves 61.9% accuracy on EQARewardBench with only 700 samples, outperforming strong proprietary baselines, including Gemini-2.5-Flash, GPT-4o, and Claude-3.5-Haiku, as well as open-source state-of-the-art models such as RoVRM and VisualPRM.
Anthology ID:
2025.emnlp-main.48
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
927–945
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.48/
Cite (ACL):
Yuhang Chen, Zhen Tan, and Tianlong Chen. 2025. EQA-RM: A Generative Embodied Reward Model with Test-time Scaling. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 927–945, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
EQA-RM: A Generative Embodied Reward Model with Test-time Scaling (Chen et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.48.pdf
Checklist:
2025.emnlp-main.48.checklist.pdf