Reasoning or Knowledge: Stratified Evaluation of Biomedical LLMs

Rahul Thapa, Qingyang Wu, Kevin Wu, Harrison G Zhang, Angela Zhang, Eric Wu, Haotian Ye, James Zou


Abstract
Medical reasoning in large language models seeks to replicate clinicians’ cognitive processes in interpreting patient data and making diagnostic decisions. However, widely used benchmarks—such as MedQA, MedMCQA, and PubMedQA—mix questions that require multi-step reasoning with those answerable through factual recall, complicating evaluation. We introduce an expert-validated evaluation framework that disentangles knowledge recall from reasoning by training a PubMedBERT-based classifier and applying it to 11 widely used biomedical QA benchmarks. This framework reveals that only 32.8% of questions require multi-step reasoning, indicating that current evaluations largely measure factual recall. Stratified evaluation of biomedical models (HuatuoGPT-o1, MedReason, m1) and general-domain models (DeepSeek-R1, o4-mini, Qwen3) consistently shows lower performance on reasoning-heavy than knowledge-heavy questions (e.g., HuatuoGPT-o1: 56.9% on knowledge vs. 44.8% on reasoning). Beyond aggregate accuracy, we assess robustness through adversarial evaluations in which model inputs are prefixed with uncertainty-inducing, incorrect statements; biomedical reasoning models degrade sharply in this setting (e.g., MedReason: 50.4% to 24.4%), with declines especially pronounced on reasoning-heavy questions. Finally, we show that fine-tuning on high-quality, reasoning-heavy examples augmented with adversarial traces, followed by reinforcement learning with GRPO, improves both robustness and accuracy across knowledge and reasoning subsets within our evaluation framework.
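The stratified scoring and adversarial-prefix setup the abstract describes can be sketched roughly as follows. All identifiers, prompts, and data below are illustrative assumptions, not the authors' code or data: the classifier output is abstracted as a precomputed stratum label, and the prefix wording is a made-up example of an "uncertainty-inducing, incorrect statement".

```python
# Minimal sketch of stratified (knowledge vs. reasoning) accuracy and an
# adversarial prefix, assuming each question has already been labeled by a
# classifier. Purely illustrative; not from the paper's implementation.

def stratified_accuracy(records):
    """records: iterable of (stratum, correct) pairs -> per-stratum accuracy."""
    totals, hits = {}, {}
    for stratum, correct in records:
        totals[stratum] = totals.get(stratum, 0) + 1
        hits[stratum] = hits.get(stratum, 0) + int(correct)
    return {s: hits[s] / totals[s] for s in totals}

def adversarial_prompt(question, wrong_answer):
    """Prefix a question with an uncertainty-inducing, incorrect claim."""
    return (f"I'm not sure, but I think the answer might be {wrong_answer}. "
            f"{question}")

# Toy graded outputs: (classifier-assigned stratum, model answered correctly?)
records = [("knowledge", True), ("knowledge", True), ("knowledge", False),
           ("reasoning", True), ("reasoning", False), ("reasoning", False)]
acc = stratified_accuracy(records)
```

Comparing `acc` between the clean and adversarial runs, per stratum, yields the kind of degradation numbers the abstract reports (e.g., a larger drop on the reasoning subset).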
Anthology ID:
2026.eacl-long.111
Volume:
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2450–2483
URL:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.111/
Cite (ACL):
Rahul Thapa, Qingyang Wu, Kevin Wu, Harrison G Zhang, Angela Zhang, Eric Wu, Haotian Ye, and James Zou. 2026. Reasoning or Knowledge: Stratified Evaluation of Biomedical LLMs. In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2450–2483, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Reasoning or Knowledge: Stratified Evaluation of Biomedical LLMs (Thapa et al., EACL 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.111.pdf