SubmissionNumber#=%=#37 FinalPaperTitle#=%=#Advancing Biomedical Claim Verification by Using Large Language Models with Better Structured Prompting Strategies ShortPaperTitle#=%=# NumberOfPages#=%=#19 CopyrightSigned#=%=#Siting Liang JobTitle#==# Organization#==# Abstract#==#In this work, we propose a structured four-step prompting strategy that explicitly guides large language models (LLMs) through (1) claim comprehension, (2) evidence analysis, (3) intermediate conclusion, and (4) entailment decision-making to improve the accuracy of biomedical claim verification. This strategy leverages compositional and human-like reasoning to enhance logical consistency and factual grounding, reducing reliance on memorizing few-shot exemplars and helping LLMs generalize reasoning patterns across different biomedical claim verification tasks. Through extensive evaluation on biomedical NLI benchmarks, we analyze the individual contributions of each reasoning step. Our findings demonstrate that comprehension, evidence analysis, and intermediate conclusion each play distinct yet complementary roles. Systematic prompting and carefully designed step-wise instructions not only unlock the latent cognitive abilities of LLMs but also enhance interpretability by making it easier to trace errors and understand the model's reasoning process. Our research aims to improve the reliability of AI-driven biomedical claim verification. Author{1}{Firstname}#=%=#Siting Author{1}{Lastname}#=%=#Liang Author{1}{Username}#=%=#sili03 Author{1}{Email}#=%=#siting.liang@dfki.de Author{1}{Affiliation}#=%=#German Research Center for Artificial Intelligence Author{2}{Firstname}#=%=#Daniel Author{2}{Lastname}#=%=#Sonntag Author{2}{Email}#=%=#daniel.sonntag@dfki.de Author{2}{Affiliation}#=%=#German Research Center for Artificial Intelligence ==========