Med-VRAgent: A Framework for Medical Visual Reasoning-Enhanced Agents

Guangfu Guo, Xiaoqian Lu, Yue Feng


Abstract
Vision-language models (VLMs) achieve promising results in medical reasoning but struggle with hallucinations, vague descriptions, inconsistent logic, and poor localization. To address these issues, we propose an agent framework named Medical Visual Reasoning Agent (Med-VRAgent). The approach is based on the Visual Guidance and Self-Reward paradigms and Monte Carlo Tree Search (MCTS). By combining Visual Guidance with tree search, Med-VRAgent improves the medical visual reasoning capabilities of VLMs. We use the trajectories collected by Med-VRAgent as feedback to further improve performance by fine-tuning the VLMs with the proximal policy optimization (PPO) objective. Experiments on multiple medical VQA benchmarks demonstrate that our method outperforms existing approaches.
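For context, the PPO fine-tuning step mentioned in the abstract conventionally maximizes the standard clipped surrogate objective (shown here in its textbook form; how the paper derives rewards from the collected Med-VRAgent trajectories is not specified in this abstract):

```latex
\mathcal{L}^{\mathrm{CLIP}}(\theta)
  = \mathbb{E}_t\!\left[
      \min\!\left(
        r_t(\theta)\,\hat{A}_t,\;
        \operatorname{clip}\!\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\,\hat{A}_t
      \right)
    \right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
```

Here $\hat{A}_t$ is an advantage estimate and $\epsilon$ the clipping range; the clipping keeps each policy update close to the previous policy $\pi_{\theta_{\mathrm{old}}}$.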
Anthology ID:
2025.emnlp-main.939
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
18613–18627
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.emnlp-main.939/
DOI:
10.18653/v1/2025.emnlp-main.939
Cite (ACL):
Guangfu Guo, Xiaoqian Lu, and Yue Feng. 2025. Med-VRAgent: A Framework for Medical Visual Reasoning-Enhanced Agents. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 18613–18627, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Med-VRAgent: A Framework for Medical Visual Reasoning-Enhanced Agents (Guo et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.emnlp-main.939.pdf
Checklist:
2025.emnlp-main.939.checklist.pdf