Incorporating Centering Theory into Neural Coreference Resolution

Haixia Chai, Michael Strube


Abstract
In recent years, transformer-based coreference resolution systems have achieved remarkable improvements on the CoNLL dataset. However, how coreference resolvers can benefit from discourse coherence is still an open question. In this paper, we propose to incorporate centering transitions derived from centering theory, in the form of a graph, into a neural coreference model. Our method improves performance over SOTA baselines, especially on pronoun resolution in long documents, formal well-structured text, and clusters with scattered mentions.
Anthology ID:
2022.naacl-main.218
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
2996–3002
URL:
https://aclanthology.org/2022.naacl-main.218
DOI:
10.18653/v1/2022.naacl-main.218
Bibkey:
Cite (ACL):
Haixia Chai and Michael Strube. 2022. Incorporating Centering Theory into Neural Coreference Resolution. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2996–3002, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Incorporating Centering Theory into Neural Coreference Resolution (Chai & Strube, NAACL 2022)
PDF:
https://preview.aclanthology.org/improve-issue-templates/2022.naacl-main.218.pdf
Software:
 2022.naacl-main.218.software.zip
Video:
 https://preview.aclanthology.org/improve-issue-templates/2022.naacl-main.218.mp4
Code
 haixiachai/ct-coref
Data
GAP Coreference Dataset