Damir Juric
2022
Human Evaluation and Correlation with Automatic Metrics in Consultation Note Generation
Francesco Moramarco | Alex Papadopoulos Korfiatis | Mark Perera | Damir Juric | Jack Flann | Ehud Reiter | Anya Belz | Aleksandar Savkov
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
In recent years, machine learning models have rapidly become better at generating clinical consultation notes; yet there is little work on how to properly evaluate generated consultation notes to understand the impact they may have on both the clinician using them and the patient's clinical safety. To address this, we present an extensive human evaluation study of consultation notes where 5 clinicians (i) listen to 57 mock consultations, (ii) write their own notes, (iii) post-edit a number of automatically generated notes, and (iv) extract all the errors, both quantitative and qualitative. We then carry out a correlation study between 18 automatic quality metrics and the human judgements. We find that a simple, character-based Levenshtein distance metric performs on par with, if not better than, common model-based metrics like BERTScore. All our findings and annotations are open-sourced.
2021
Towards Objectively Evaluating the Quality of Generated Medical Summaries
Francesco Moramarco | Damir Juric | Aleksandar Savkov | Ehud Reiter
Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval)
We propose a method for evaluating the quality of generated text by asking evaluators to count facts, and computing precision, recall, F-score, and accuracy from the raw counts. We believe this approach leads to a more objective and easier-to-reproduce evaluation. We apply it to the task of medical report summarisation, where measuring objective quality and accuracy is of paramount importance.