Aligning Factual Consistency for Clinical Studies Summarization through Reinforcement Learning

Xiangru Tang, Arman Cohan, Mark Gerstein


Abstract
In the rapidly evolving landscape of medical research, accurate and concise summarization of clinical studies is crucial to support evidence-based practice. This paper presents a novel approach to clinical studies summarization, leveraging reinforcement learning to enhance factual consistency and align summaries with human annotator preferences. Our work focuses on two tasks: Conclusion Generation and Review Generation. We train a CONFIT summarization model that outperforms GPT-3 and previous state-of-the-art models on the same datasets, and we collect expert and crowd-worker annotations to evaluate the quality and factual consistency of the generated summaries. These annotations enable us to measure the correlation of various automatic metrics, including modern factual evaluation metrics like QAFactEval, with human-assessed factual consistency. By employing the top-correlated metrics as reinforcement learning objectives, we demonstrate improved factuality in generated summaries, which human annotators also prefer.
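The recipe in the abstract (use the factuality metric best correlated with human judgments as an RL reward) can be made concrete with a short sketch. The following is a minimal, illustrative REINFORCE-style loop, not the authors' released code: the base checkpoint (facebook/bart-large-cnn) is an arbitrary choice, and factuality_reward is a hypothetical placeholder for whichever metric (e.g., QAFactEval) correlates best with human-assessed factual consistency.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical base checkpoint; the paper fine-tunes its own CONFIT model.
MODEL_NAME = "facebook/bart-large-cnn"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)


def factuality_reward(source: str, summary: str) -> float:
    """Hypothetical stand-in for the reward: the paper plugs in the
    automatic factuality metric (e.g., QAFactEval) that correlates
    best with human judgments."""
    return 0.5  # placeholder score in [0, 1]


def reinforce_step(source: str) -> float:
    model.train()
    inputs = tokenizer(source, return_tensors="pt",
                       truncation=True, max_length=1024)

    # Sample a summary from the current policy.
    sample = model.generate(**inputs, do_sample=True, top_p=0.9,
                            max_length=128)
    summary = tokenizer.decode(sample[0], skip_special_tokens=True)

    # Score the sample with the factuality metric (the RL reward).
    reward = factuality_reward(source, summary)

    # Recompute the negative log-likelihood of the sampled summary:
    # drop the decoder start token and mask padding out of the loss.
    labels = sample[:, 1:].clone()
    labels[labels == tokenizer.pad_token_id] = -100
    nll = model(**inputs, labels=labels).loss

    # REINFORCE: minimizing reward * NLL pushes up the likelihood of
    # high-reward samples. (In practice a baseline, e.g., the reward
    # of a greedy decode, is subtracted to reduce gradient variance.)
    loss = reward * nll
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```

Scaling the sampled sequence's negative log-likelihood by its reward is the basic policy-gradient estimator; the paper's contribution lies in selecting that reward from the metrics whose scores correlate with human-assessed factual consistency.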
Anthology ID:
2023.clinicalnlp-1.7
Volume:
Proceedings of the 5th Clinical Natural Language Processing Workshop
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Tristan Naumann, Asma Ben Abacha, Steven Bethard, Kirk Roberts, Anna Rumshisky
Venue:
ClinicalNLP
Publisher:
Association for Computational Linguistics
Pages:
48–58
URL:
https://aclanthology.org/2023.clinicalnlp-1.7
DOI:
10.18653/v1/2023.clinicalnlp-1.7
Bibkey:
tang-etal-2023-aligning
Cite (ACL):
Xiangru Tang, Arman Cohan, and Mark Gerstein. 2023. Aligning Factual Consistency for Clinical Studies Summarization through Reinforcement Learning. In Proceedings of the 5th Clinical Natural Language Processing Workshop, pages 48–58, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Aligning Factual Consistency for Clinical Studies Summarization through Reinforcement Learning (Tang et al., ClinicalNLP 2023)
PDF:
https://aclanthology.org/2023.clinicalnlp-1.7.pdf