Knowledge-Aware Co-Reasoning for Multidisciplinary Collaboration

Xurui Li, Wanghaijiao, Kaisong Song, Rui Zhu, Haixu Tang


Abstract
Large language models (LLMs) have shown significant potential to improve diagnostic performance for clinical professionals. Existing multi-agent paradigms rely mainly on prompt engineering and therefore suffer from improper agent selection and insufficient knowledge integration. In this work, we propose a novel framework, KACR (Knowledge-Aware Co-Reasoning), that integrates structured knowledge reasoning into multidisciplinary collaboration from two aspects: (1) a reinforcement learning-optimized agent that uses clinical knowledge graphs to guide dynamic discipline determination; (2) a multidisciplinary collaboration strategy that enables robust consensus through the integration of domain-specific expertise and an interdisciplinary persuasion mechanism. Extensive experiments on both academic and real-world datasets demonstrate the effectiveness of our method.
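
The two components described above, knowledge-graph-guided discipline selection and consensus-oriented collaboration with persuasion, can be illustrated with a minimal Python sketch. All names below (retrieve_kg_paths, select_disciplines, policy.act, agents[d].diagnose, agents[d].revise) are hypothetical placeholders chosen for illustration and are not the paper's actual interfaces, knowledge-graph schema, or RL policy.

# Minimal sketch of a KACR-style pipeline under the assumptions stated above.

def retrieve_kg_paths(case_description, knowledge_graph):
    """Collect knowledge-graph paths linking clinical findings to candidate disciplines."""
    # Assumed KG interface: e.g., symptom -> disease -> specialty traversal.
    return knowledge_graph.paths_from(case_description)

def select_disciplines(case_description, kg_paths, policy):
    """RL-optimized selection agent: choose which disciplines to consult, guided by KG evidence."""
    return policy.act(state={"case": case_description, "evidence": kg_paths})

def collaborate(case_description, disciplines, agents, max_rounds=3):
    """Iterative multidisciplinary discussion with a persuasion step until consensus."""
    opinions = {d: agents[d].diagnose(case_description) for d in disciplines}
    for _ in range(max_rounds):
        if len(set(opinions.values())) == 1:  # consensus reached
            return next(iter(opinions.values()))
        # Each specialist sees the others' opinions and may persuade or be persuaded.
        opinions = {d: agents[d].revise(case_description, opinions) for d in disciplines}
    # Fall back to the most common opinion if full consensus is not reached.
    values = list(opinions.values())
    return max(set(values), key=values.count)

A working system would additionally require the clinical knowledge graph, the trained selection policy, and the LLM-backed discipline agents, all of which belong to the paper's contribution rather than this sketch.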
Anthology ID:
2025.emnlp-main.687
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
13615–13631
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.687/
Cite (ACL):
Xurui Li, Wanghaijiao, Kaisong Song, Rui Zhu, and Haixu Tang. 2025. Knowledge-Aware Co-Reasoning for Multidisciplinary Collaboration. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 13615–13631, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Knowledge-Aware Co-Reasoning for Multidisciplinary Collaboration (Li et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.687.pdf
Checklist:
2025.emnlp-main.687.checklist.pdf