End-to-end Deep Reinforcement Learning Based Coreference Resolution

Hongliang Fei, Xu Li, Dingcheng Li, Ping Li

Abstract
Recent neural network models have significantly advanced the task of coreference resolution. However, current neural coreference models are usually trained with heuristic loss functions that are computed over a sequence of local decisions. In this paper, we introduce an end-to-end reinforcement learning based coreference resolution model to directly optimize coreference evaluation metrics. Specifically, we modify the state-of-the-art higher-order mention ranking approach in Lee et al. (2018) to a reinforced policy gradient model by incorporating the reward associated with a sequence of coreference linking actions. Furthermore, we introduce maximum entropy regularization for adequate exploration to prevent the model from prematurely converging to a bad local optimum. Our proposed model achieves new state-of-the-art performance on the English OntoNotes v5.0 benchmark.
Anthology ID:
P19-1064
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
660–665
URL:
https://aclanthology.org/P19-1064
DOI:
10.18653/v1/P19-1064
Cite (ACL):
Hongliang Fei, Xu Li, Dingcheng Li, and Ping Li. 2019. End-to-end Deep Reinforcement Learning Based Coreference Resolution. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 660–665, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
End-to-end Deep Reinforcement Learning Based Coreference Resolution (Fei et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1064.pdf
Data
CoNLL, CoNLL-2012, OntoNotes 5.0