How Good Is the Model in Model-in-the-loop Event Coreference Resolution Annotation?

Shafiuddin Rehan Ahmed, Abhijnan Nath, Michael Regan, Adam Pollins, Nikhil Krishnaswamy, James H. Martin


Abstract
Annotating cross-document event coreference links is a time-consuming and cognitively demanding task that can compromise annotation quality and efficiency. To address this, we propose a model-in-the-loop annotation approach for event coreference resolution, in which a machine learning model suggests only the event pairs that are likely to corefer. We evaluate the effectiveness of this approach by first simulating the annotation process and then, using a novel annotator-centric Recall-Annotation effort trade-off metric, comparing the results of various underlying models and datasets. Finally, we present a method for obtaining 97% recall while substantially reducing the workload required by a fully manual annotation process.
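The core idea described above, filtering candidate event pairs with a model so annotators only review likely coreferring pairs, can be illustrated with a minimal sketch. The following Python snippet is purely illustrative and not the authors' implementation; `score_fn`, the data structures, and the threshold-based filtering are assumptions made for the example. It shows how one might simulate the trade-off between recall of gold coreference links and annotation effort (the fraction of pairs shown to annotators).

```python
# Hypothetical sketch of model-in-the-loop candidate filtering:
# a scoring model ranks event-mention pairs, annotators see only pairs
# scoring above a threshold, and we measure recall of gold coreference
# links against the share of pairs shown (annotation effort).
# All names and signatures here are illustrative assumptions.

from itertools import combinations

def simulate_filtering(mentions, gold_links, score_fn, threshold):
    """Return (recall, effort) for one threshold setting.

    mentions   -- list of event-mention identifiers
    gold_links -- set of frozensets, each a gold coreferring mention pair
    score_fn   -- model scorer: (m1, m2) -> coreference likelihood in [0, 1]
    threshold  -- pairs scoring below this are never shown to annotators
    """
    all_pairs = list(combinations(mentions, 2))
    shown = [p for p in all_pairs if score_fn(*p) >= threshold]

    shown_set = {frozenset(p) for p in shown}
    recovered = gold_links & shown_set

    recall = len(recovered) / len(gold_links) if gold_links else 1.0
    effort = len(shown) / len(all_pairs) if all_pairs else 0.0
    return recall, effort
```

Sweeping the threshold and plotting the resulting (recall, effort) points would trace out a recall-versus-effort curve of the kind the abstract's trade-off metric is meant to summarize.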
Anthology ID:
2023.law-1.14
Volume:
Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Jakob Prange, Annemarie Friedrich
Venue:
LAW
Publisher:
Association for Computational Linguistics
Pages:
136–145
URL:
https://aclanthology.org/2023.law-1.14
DOI:
10.18653/v1/2023.law-1.14
Cite (ACL):
Shafiuddin Rehan Ahmed, Abhijnan Nath, Michael Regan, Adam Pollins, Nikhil Krishnaswamy, and James H. Martin. 2023. How Good Is the Model in Model-in-the-loop Event Coreference Resolution Annotation?. In Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII), pages 136–145, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
How Good Is the Model in Model-in-the-loop Event Coreference Resolution Annotation? (Ahmed et al., LAW 2023)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2023.law-1.14.pdf
Video:
https://preview.aclanthology.org/emnlp-22-attachments/2023.law-1.14.mp4