MedFact: A Large-scale Chinese Dataset for Evidence-based Medical Fact-checking of LLM Responses

Tong Chen, Zimu Wang, Yiyi Miao, Haoran Luo, Sun Yuanfei, Wei Wang, Zhengyong Jiang, Procheta Sen, Jionglong Su


Abstract
Medical fact-checking has become increasingly critical as more individuals seek medical information online. However, existing datasets predominantly focus on human-generated content, leaving the verification of content generated by large language models (LLMs) largely unexplored. To address this gap, we introduce MedFact, the first evidence-based Chinese medical fact-checking dataset targeting LLM-generated medical content. It comprises 1,321 questions and 7,409 claims, mirroring the complexity of real-world medical scenarios. We conduct comprehensive experiments in both in-context learning (ICL) and fine-tuning settings, demonstrating both the capabilities and the challenges of current LLMs on this task, accompanied by an in-depth error analysis that highlights key directions for future research. Our dataset is publicly available at https://github.com/AshleyChenNLP/MedFact.
Anthology ID:
2025.emnlp-main.1646
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
32328–32341
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1646/
Cite (ACL):
Tong Chen, Zimu Wang, Yiyi Miao, Haoran Luo, Sun Yuanfei, Wei Wang, Zhengyong Jiang, Procheta Sen, and Jionglong Su. 2025. MedFact: A Large-scale Chinese Dataset for Evidence-based Medical Fact-checking of LLM Responses. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 32328–32341, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
MedFact: A Large-scale Chinese Dataset for Evidence-based Medical Fact-checking of LLM Responses (Chen et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1646.pdf
Checklist:
 2025.emnlp-main.1646.checklist.pdf