Joint Learning for Event Coreference Resolution

Jing Lu, Vincent Ng

Abstract
While joint models have been developed for many NLP tasks, the vast majority of event coreference resolvers, including the top-performing resolvers competing in the recent TAC KBP 2016 Event Nugget Detection and Coreference task, are pipeline-based, where the propagation of errors from the trigger detection component to the event coreference component is a major performance limiting factor. To address this problem, we propose a model for jointly learning event coreference, trigger detection, and event anaphoricity. Our joint model is novel in its choice of tasks and its features for capturing cross-task interactions. To our knowledge, this is the first attempt to train a mention-ranking model and employ event anaphoricity for event coreference. Our model achieves the best results to date on the KBP 2016 English and Chinese datasets.
Anthology ID:
P17-1009
Volume:
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2017
Address:
Vancouver, Canada
Editors:
Regina Barzilay, Min-Yen Kan
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
90–101
URL:
https://aclanthology.org/P17-1009
DOI:
10.18653/v1/P17-1009
Cite (ACL):
Jing Lu and Vincent Ng. 2017. Joint Learning for Event Coreference Resolution. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 90–101, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Joint Learning for Event Coreference Resolution (Lu & Ng, ACL 2017)
PDF:
https://preview.aclanthology.org/teach-a-man-to-fish/P17-1009.pdf
Video:
https://preview.aclanthology.org/teach-a-man-to-fish/P17-1009.mp4