Shared Task in Evaluating Accuracy: Leveraging Pre-Annotations in the Validation Process

Nicolas Garneau, Luc Lamontagne

Abstract
We hereby present our submission to the Shared Task in Evaluating Accuracy at the INLG 2021 Conference. Our evaluation protocol relies on three main components: rules and text classifiers that pre-annotate the dataset, a human annotator who validates the pre-annotations, and a web interface that facilitates this validation. We in fact make two submissions: the first evaluates the rules and classifiers alone (pre-annotations), and the second evaluates the human annotation aided by these pre-annotations through the web interface (hybrid). The code for the web interface and the classifiers is publicly available.
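The pipeline the abstract outlines, automatic pre-annotation followed by human validation, can be sketched in a few lines. The snippet below is a minimal illustration under assumed names (PreAnnotation, pre_annotate, validate); it is not the authors' released code, and its single number-checking rule merely stands in for the paper's full set of rules and classifiers.

from __future__ import annotations

import re
from dataclasses import dataclass


@dataclass
class PreAnnotation:
    span: tuple[int, int]  # character offsets in the generated text
    text: str              # the flagged surface string
    label: str             # suspected error type, e.g. "NUMBER"


def pre_annotate(generated: str, source_numbers: set[str]) -> list[PreAnnotation]:
    """Automatic pass: flag every number in the generated text that is not
    attested in the source data (a stand-in for the rules/classifiers)."""
    flags = []
    for match in re.finditer(r"\d+(?:\.\d+)?", generated):
        if match.group() not in source_numbers:
            flags.append(PreAnnotation(match.span(), match.group(), "NUMBER"))
    return flags


def validate(flags: list[PreAnnotation]) -> list[PreAnnotation]:
    """Human pass: the annotator confirms or rejects each pre-annotation.
    The paper uses a web interface for this; stdin keeps the sketch short."""
    confirmed = []
    for flag in flags:
        answer = input(f"Is '{flag.text}' ({flag.label}) an error? [y/n] ")
        if answer.strip().lower() == "y":
            confirmed.append(flag)
    return confirmed


if __name__ == "__main__":
    source_numbers = {"28", "112"}  # numbers supported by the source data
    generated = "The team scored 28 points in the first half and 30 in the second."
    print(validate(pre_annotate(generated, source_numbers)))

In the actual protocol the validation step happens in the web interface rather than on stdin; the sketch only illustrates the division of labour between automatic flagging and human confirmation.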
Anthology ID: 2021.inlg-1.26
Volume: Proceedings of the 14th International Conference on Natural Language Generation
Month: August
Year: 2021
Address: Aberdeen, Scotland, UK
Editors: Anya Belz, Angela Fan, Ehud Reiter, Yaji Sripada
Venue: INLG
SIG: SIGGEN
Publisher: Association for Computational Linguistics
Pages: 266–270
URL: https://aclanthology.org/2021.inlg-1.26
DOI: 10.18653/v1/2021.inlg-1.26
Cite (ACL): Nicolas Garneau and Luc Lamontagne. 2021. Shared Task in Evaluating Accuracy: Leveraging Pre-Annotations in the Validation Process. In Proceedings of the 14th International Conference on Natural Language Generation, pages 266–270, Aberdeen, Scotland, UK. Association for Computational Linguistics.
Cite (Informal): Shared Task in Evaluating Accuracy: Leveraging Pre-Annotations in the Validation Process (Garneau & Lamontagne, INLG 2021)
PDF: https://preview.aclanthology.org/teach-a-man-to-fish/2021.inlg-1.26.pdf