Eric Wu
2026
Reasoning or Knowledge: Stratified Evaluation of Biomedical LLMs
Rahul Thapa | Qingyang Wu | Kevin Wu | Harrison G Zhang | Angela Zhang | Eric Wu | Haotian Ye | James Zou
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Medical reasoning in large language models seeks to replicate clinicians’ cognitive processes in interpreting patient data and making diagnostic decisions. However, widely used benchmarks—such as MedQA, MedMCQA, and PubMedQA—mix questions that require multi-step reasoning with those answerable through factual recall, complicating evaluation. We introduce an expert-validated evaluation framework that disentangles knowledge recall from reasoning by training a PubMedBERT-based classifier and applying it to 11 widely used biomedical QA benchmarks. This framework reveals that only 32.8% of questions require multi-step reasoning, indicating that current evaluations largely measure factual recall. Stratified evaluation of biomedical models (HuatuoGPT-o1, MedReason, m1) and general-domain models (DeepSeek-R1, o4-mini, Qwen3) consistently shows lower performance on reasoning-heavy than knowledge-heavy questions (e.g., HuatuoGPT-o1: 56.9% on knowledge vs. 44.8% on reasoning). Beyond aggregate accuracy, we assess robustness through adversarial evaluations in which model prompts are prefixed with uncertainty-inducing, incorrect statements; biomedical reasoning models degrade sharply in this setting (e.g., MedReason: 50.4% to 24.4%), with declines especially pronounced on reasoning-heavy questions. Finally, we show that fine-tuning on high-quality, reasoning-heavy examples augmented with adversarial traces, followed by reinforcement learning with GRPO, improves both robustness and accuracy across knowledge and reasoning subsets within our evaluation framework.