Xiaoman Zhang
2026
Do Mixed-Vendor Multi-Agent LLMs Improve Clinical Diagnosis?
Grace Chang Yuan | Xiaoman Zhang | Sung Eun Kim | Pranav Rajpurkar
Proceedings of the 1st Workshop on Linguistic Analysis for Health (HeaLing 2026)
Multi-agent large language model (LLM) systems have emerged as a promising approach for clinical diagnosis, leveraging collaboration among agents to refine medical reasoning. However, most existing frameworks rely on single-vendor teams (e.g., multiple agents from the same model family), which risk correlated failure modes that reinforce shared biases rather than correcting them. We investigate the impact of vendor diversity by comparing Single-LLM, Single-Vendor, and Mixed-Vendor Multi-Agent Conversation (MAC) frameworks. Using three doctor agents instantiated with o4-mini, Gemini-2.5-Pro, and Claude-4.5-Sonnet, we evaluate performance on RareBench and DiagnosisArena. Mixed-vendor configurations consistently outperform single-vendor counterparts, achieving state-of-the-art recall and accuracy. Overlap analysis reveals the underlying mechanism: mixed-vendor teams pool complementary inductive biases, surfacing correct diagnoses that individual models or homogeneous teams collectively miss. These results highlight vendor diversity as a key design principle for robust clinical diagnostic systems.
2024
RaTEScore: A Metric for Radiology Report Generation
Weike Zhao | Chaoyi Wu | Xiaoman Zhang | Ya Zhang | Yanfeng Wang | Weidi Xie
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
This paper introduces a novel, entity-aware metric, termed Radiological Report (Text) Evaluation (RaTEScore), to assess the quality of medical reports generated by AI models. RaTEScore emphasizes crucial medical entities such as diagnostic outcomes and anatomical details, and is robust against complex medical synonyms and sensitive to negation expressions. Technically, we developed a comprehensive medical NER dataset, RaTE-NER, and trained an NER model specifically for this purpose. This model enables the decomposition of complex radiological reports into their constituent medical entities. The metric itself is derived by comparing the similarity of entity embeddings, obtained from a language model, based on their types and their clinical significance. Our evaluations demonstrate that RaTEScore aligns more closely with human preference than existing metrics, as validated both on established public benchmarks and on our newly proposed RaTE-Eval benchmark.