SubmissionNumber#=%=#49
FinalPaperTitle#=%=#Questioning Our Questions: How Well Do Medical QA Benchmarks Evaluate Clinical Capabilities of Language Models?
ShortPaperTitle#=%=#
NumberOfPages#=%=#23
CopyrightSigned#=%=#Siun Kim
JobTitle#==#Assistant Research Professor
Organization#==#Seoul National University Hospital, Biomedical Research Institute, 101 Daehak-ro, Jongno-gu, Seoul 03080, South Korea
Abstract#==#Recent advances in large language models (LLMs) have led to impressive performance on medical question-answering (QA) benchmarks. However, the extent to which these benchmarks reflect real-world clinical capabilities remains uncertain. To address this gap, we systematically analyzed the correlation between LLM performance on major medical QA benchmarks (e.g., MedQA, MedMCQA, PubMedQA, and MMLU medicine subjects) and clinical performance in real-world settings. Our dataset included 702 clinical evaluations of 85 LLMs from 168 studies. Benchmark scores demonstrated a moderate correlation with clinical performance (Spearman's rho = 0.59), albeit substantially lower than inter-benchmark correlations. Among the benchmarks, MedQA was the most predictive but failed to capture essential competencies such as patient communication, longitudinal care, and clinical information extraction. Using Bayesian hierarchical modeling, we estimated representative clinical performance and identified GPT-4 and GPT-4o as consistently top-performing models, often matching or exceeding human physicians. Despite longstanding concerns about the clinical validity of medical QA benchmarks, this study offers the first quantitative analysis of their alignment with real-world clinical performance.
Author{1}{Firstname}#=%=#Siun
Author{1}{Lastname}#=%=#Kim
Author{1}{Username}#=%=#shiuhn95
Author{1}{Email}#=%=#shiuhn95@snu.ac.kr
Author{1}{Affiliation}#=%=#Seoul National University Hospital
Author{2}{Firstname}#=%=#Hyung-Jin
Author{2}{Lastname}#=%=#Yoon
Author{2}{Email}#=%=#hjyoon@snu.ac.kr
Author{2}{Affiliation}#=%=#Biomedical Engineering, Seoul National University College of Medicine
==========