Effective Use of Context in Noisy Entity Linking

David Mueller, Greg Durrett


Abstract
To disambiguate between closely related concepts, entity linking systems need to effectively distill cues from their context, which may be quite noisy. We investigate several techniques for using these cues in the context of noisy entity linking on short texts. Our starting point is a state-of-the-art attention-based model from prior work; while this model’s attention typically identifies context that is topically relevant, it fails to identify some of the most indicative surface strings, especially those exhibiting lexical overlap with the true title. Augmenting the model with convolutional networks over characters still leaves it largely unable to pick up on these cues compared to sparse features that target them directly, indicating that automatically learning how to identify relevant character-level context features is a hard problem. Our final system outperforms past work on the WikilinksNED test set by 2.8% absolute.
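The abstract contrasts learned character-level representations with sparse features that directly target lexical overlap between the context and the true title. The sketch below illustrates what such overlap features might look like; the function name, feature templates, and length filter are assumptions for illustration, not the paper's actual feature set.

```python
# Hypothetical sketch of sparse lexical-overlap features of the kind the
# abstract describes: binary indicators of surface-string overlap between
# a mention's context tokens and a candidate Wikipedia title.
# All names and thresholds here are illustrative, not from the paper.

def overlap_features(context_tokens, candidate_title):
    """Return a dict of sparse binary features capturing exact and
    partial lexical overlap with the candidate title."""
    title_tokens = set(candidate_title.lower().split("_"))
    feats = {}
    for tok in set(t.lower() for t in context_tokens):
        if len(tok) <= 2:  # skip very short tokens to limit spurious matches
            continue
        if tok in title_tokens:
            feats[f"exact_overlap={tok}"] = 1.0
        elif any(tok in tt or tt in tok for tt in title_tokens):
            feats[f"partial_overlap={tok}"] = 1.0
    return feats
```

Features like these fire directly on the indicative surface strings the attention model misses, which is one way to operationalize the paper's observation that targeting such cues explicitly outperforms learning them from characters.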
Anthology ID:
D18-1126
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
1024–1029
URL:
https://aclanthology.org/D18-1126
DOI:
10.18653/v1/D18-1126
Cite (ACL):
David Mueller and Greg Durrett. 2018. Effective Use of Context in Noisy Entity Linking. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1024–1029, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Effective Use of Context in Noisy Entity Linking (Mueller & Durrett, EMNLP 2018)
PDF:
https://preview.aclanthology.org/ml4al-ingestion/D18-1126.pdf
Code:
davidandym/wikilinks-ned