Can Large Language Models Safely Address Patient Questions Following Cataract Surgery?

Mohita Chowdhury, Ernest Lim, Aisling Higham, Rory McKinnon, Nikoletta Ventoura, Yajie He, Nick De Pennington


Abstract
Recent advances in large language models (LLMs) have generated significant interest in their application across various domains, including healthcare. However, there is limited data on their safety and performance in real-world scenarios. This study uses data collected by an autonomous telemedicine clinical assistant. The assistant asks symptom-based questions to elicit patient concerns and allows patients to ask questions about their post-operative recovery. We utilise real-world post-operative questions posed to the assistant by a cohort of 120 patients to examine the safety and appropriateness of responses generated by ChatGPT, a recent popular LLM from OpenAI. We demonstrate that LLMs have the potential to helpfully address routine patient queries following cataract surgery. However, important limitations around the safety of today’s models exist and must be considered.
Anthology ID: 2023.clinicalnlp-1.17
Volume: Proceedings of the 5th Clinical Natural Language Processing Workshop
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Tristan Naumann, Asma Ben Abacha, Steven Bethard, Kirk Roberts, Anna Rumshisky
Venue: ClinicalNLP
Publisher: Association for Computational Linguistics
Pages: 131–137
URL: https://aclanthology.org/2023.clinicalnlp-1.17
DOI: 10.18653/v1/2023.clinicalnlp-1.17
Cite (ACL):
Mohita Chowdhury, Ernest Lim, Aisling Higham, Rory McKinnon, Nikoletta Ventoura, Yajie He, and Nick De Pennington. 2023. Can Large Language Models Safely Address Patient Questions Following Cataract Surgery?. In Proceedings of the 5th Clinical Natural Language Processing Workshop, pages 131–137, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Can Large Language Models Safely Address Patient Questions Following Cataract Surgery? (Chowdhury et al., ClinicalNLP 2023)
PDF: https://preview.aclanthology.org/dois-2013-emnlp/2023.clinicalnlp-1.17.pdf