The Doctor Will Agree With You Now: Sycophancy of Large Language Models in Multi-Turn Medical Conversations
Taeil Matthew Kim, Luyang Luo, Sung Eun Kim, Arjun Kumar Manrai, Eric Topol, Pranav Rajpurkar
Abstract
Large language models (LLMs) increasingly exhibit sycophancy—the tendency to conform to user beliefs rather than provide factually accurate information—posing significant risks in healthcare applications where reliability is paramount. We evaluate sycophantic behavior in ten LLMs from OpenAI, Google, and Anthropic across multi-turn medical conversations using an escalatory pushback framework. To enable fine-grained analysis, we introduce Resistance, a metric that measures nonconformity to user stances at each conversational turn, providing insights beyond existing flip-based metrics. Evaluating on MedCaseReasoning (open-ended diagnostic questions) and PubMedQA (clear-answer biomedical questions), we find that Gemini models exhibit the highest Resistance, followed by OpenAI and Claude models. We further observe that response patterns ("Yes, but..." vs. "Yes, and...") may be more predictive of sycophancy than specific phrases. Notably, all models are more easily persuaded to change their answers on clear multiple-choice questions than on ambiguous diagnostic cases. Our findings highlight critical vulnerabilities in deploying LLMs for clinical decision support and suggest that training toward contradiction-maintaining response patterns may serve as a potential mitigation strategy.
- Anthology ID:
- 2026.healing-1.2
- Volume:
- Proceedings of the 1st Workshop on Linguistic Analysis for Health (HeaLing 2026)
- Month:
- March
- Year:
- 2026
- Address:
- Rabat, Morocco
- Editors:
- Vera Danilova, Murathan Kurfalı, Ylva Söderfeldt, Julia Reed, Andrew Burchell
- Venues:
- HeaLing | WS
- Publisher:
- Association for Computational Linguistics
- Pages:
- 19–34
- URL:
- https://preview.aclanthology.org/ingest-eacl/2026.healing-1.2/
- Cite (ACL):
- Taeil Matthew Kim, Luyang Luo, Sung Eun Kim, Arjun Kumar Manrai, Eric Topol, and Pranav Rajpurkar. 2026. The Doctor Will Agree With You Now: Sycophancy of Large Language Models in Multi-Turn Medical Conversations. In Proceedings of the 1st Workshop on Linguistic Analysis for Health (HeaLing 2026), pages 19–34, Rabat, Morocco. Association for Computational Linguistics.
- Cite (Informal):
- The Doctor Will Agree With You Now: Sycophancy of Large Language Models in Multi-Turn Medical Conversations (Kim et al., HeaLing 2026)
- PDF:
- https://preview.aclanthology.org/ingest-eacl/2026.healing-1.2.pdf