DERA: Enhancing Large Language Model Completions with Dialog-Enabled Resolving Agents

Varun Nair, Elliot Schumacher, Geoffrey Tso, Anitha Kannan


Abstract
Large language models (LLMs) have emerged as valuable tools for many natural language understanding tasks. In safety-critical applications such as healthcare, the utility of these models is governed by their ability to generate factually accurate and complete outputs. In this work, we present dialog-enabled resolving agents (DERA). DERA is a paradigm made possible by the increased conversational abilities of LLMs. It provides a simple, interpretable forum for models to communicate feedback and iteratively improve output. We frame our dialog as a discussion between two agent types – a Researcher, who processes information and identifies crucial problem components, and a Decider, who has the autonomy to integrate the Researcher’s information and make judgments on the final output. We test DERA against three clinically-focused tasks, with GPT-4 serving as our LLM. DERA shows significant improvement over the base GPT-4 performance in both human expert preference evaluations and quantitative metrics for medical conversation summarization and care plan generation. In a new finding, we also show that GPT-4’s performance (70%) on an open-ended version of the MedQA question-answering (QA) dataset (Jin et al., 2021; USMLE) is well above the passing level (60%), with DERA showing similar performance. We will release the open-ended MedQA dataset.
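The abstract describes DERA as an iterative dialog between a Researcher agent, which surfaces issues, and a Decider agent, which retains authority over the final output. The following Python sketch illustrates one way such a loop could look; it assumes a generic `complete(prompt) -> str` wrapper around a chat LLM (e.g. GPT-4), and its prompts, stopping criterion, and turn limit are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch of a DERA-style Researcher/Decider refinement loop.
# `complete` is a user-supplied function that sends a prompt to an LLM
# and returns its text response; it is an assumed interface, not part of DERA.

from typing import Callable


def dera_refine(task: str, initial_output: str,
                complete: Callable[[str], str], max_turns: int = 3) -> str:
    """Iteratively refine `initial_output` via a Researcher/Decider dialog."""
    current = initial_output
    for _ in range(max_turns):
        # Researcher turn: read the task and current output, flag problems.
        feedback = complete(
            "You are a Researcher reviewing another agent's work.\n"
            f"Task:\n{task}\n\nCurrent output:\n{current}\n\n"
            "List any factual errors, omissions, or unsupported claims. "
            "If the output is acceptable, reply with exactly: NO ISSUES."
        )
        if feedback.strip().upper().startswith("NO ISSUES"):
            break
        # Decider turn: keeps autonomy over the final text, so it may accept
        # or reject any of the Researcher's suggestions.
        current = complete(
            "You are a Decider with final authority over the output.\n"
            f"Task:\n{task}\n\nCurrent output:\n{current}\n\n"
            f"Researcher feedback:\n{feedback}\n\n"
            "Revise the output only where you agree with the feedback, "
            "and return the full revised output."
        )
    return current
```

In this sketch the dialog is interpretable by construction: every Researcher critique and every Decider revision is plain text that can be logged and reviewed, which matches the "simple, interpretable forum" framing in the abstract.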
Anthology ID:
2024.clinicalnlp-1.12
Volume:
Proceedings of the 6th Clinical Natural Language Processing Workshop
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Tristan Naumann, Asma Ben Abacha, Steven Bethard, Kirk Roberts, Danielle Bitterman
Venues:
ClinicalNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
122–161
URL:
https://aclanthology.org/2024.clinicalnlp-1.12
Cite (ACL):
Varun Nair, Elliot Schumacher, Geoffrey Tso, and Anitha Kannan. 2024. DERA: Enhancing Large Language Model Completions with Dialog-Enabled Resolving Agents. In Proceedings of the 6th Clinical Natural Language Processing Workshop, pages 122–161, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
DERA: Enhancing Large Language Model Completions with Dialog-Enabled Resolving Agents (Nair et al., ClinicalNLP-WS 2024)
PDF:
https://preview.aclanthology.org/jeptaln-2024-ingestion/2024.clinicalnlp-1.12.pdf