Benchmarking Automated Clinical Language Simplification: Dataset, Algorithm, and Evaluation

Junyu Luo, Junxian Lin, Chi Lin, Cao Xiao, Xinning Gui, Fenglong Ma


Abstract
Patients with low health literacy often have difficulty understanding medical jargon and the complex structure of professional medical language. Although several studies have proposed to automatically translate expert language into language laypeople can understand, few of them address both accuracy and readability simultaneously in the clinical domain. Clinical language simplification thus remains a challenging task that previous work has not yet fully addressed. To benchmark this task, we construct a new dataset named MedLane to support the development and evaluation of automated clinical language simplification approaches. In addition, we propose a new model called DECLARE that follows the human annotation procedure and achieves state-of-the-art performance compared with eight strong baselines. To evaluate performance fairly, we also propose three task-specific evaluation metrics. Experimental results demonstrate the utility of the annotated MedLane dataset and the effectiveness of the proposed DECLARE model.
Anthology ID:
2022.coling-1.313
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
3550–3562
URL:
https://aclanthology.org/2022.coling-1.313
Cite (ACL):
Junyu Luo, Junxian Lin, Chi Lin, Cao Xiao, Xinning Gui, and Fenglong Ma. 2022. Benchmarking Automated Clinical Language Simplification: Dataset, Algorithm, and Evaluation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3550–3562, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Benchmarking Automated Clinical Language Simplification: Dataset, Algorithm, and Evaluation (Luo et al., COLING 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.coling-1.313.pdf