Exploring Neural Entity Representations for Semantic Information

Andrew Runge, Eduard Hovy


Abstract
Neural methods for embedding entities are typically extrinsically evaluated on downstream tasks and, more recently, intrinsically using probing tasks. Downstream task-based comparisons are often difficult to interpret due to differences in task structure, while probing task evaluations often look at only a few attributes and models. We address both of these issues by evaluating a diverse set of eight neural entity embedding methods on simple probing tasks, demonstrating which methods are able to remember words used to describe entities, learn type, relationship, and factual information, and identify how frequently an entity is mentioned. We also compare these methods in a unified framework on two entity linking tasks and discuss how they generalize to different model architectures and datasets.
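To illustrate what a probing-task evaluation of frozen entity embeddings looks like in practice, the sketch below trains a simple linear probe to predict an entity attribute from the embedding alone. It is a minimal, hypothetical example: the embedding matrix, dimensions, and the "entity type" labels are synthetic stand-ins, not data or code from the paper or the entitylens repository.

```python
# Minimal sketch of a probing task over frozen entity embeddings.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for pretrained entity embeddings: 1,000 entities, 300 dimensions.
entity_embeddings = rng.normal(size=(1000, 300))
# Stand-in for a probed attribute, e.g. a coarse entity type (person/place/org).
entity_types = rng.integers(0, 3, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    entity_embeddings, entity_types, test_size=0.2, random_state=0
)

# The embeddings stay frozen; only a simple linear classifier is trained,
# so held-out accuracy reflects how accessible the attribute is in the
# representation rather than the capacity of the probe itself.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```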
Anthology ID:
2020.blackboxnlp-1.20
Volume:
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2020
Address:
Online
Venue:
BlackboxNLP
Publisher:
Association for Computational Linguistics
Pages:
204–216
URL:
https://aclanthology.org/2020.blackboxnlp-1.20
DOI:
10.18653/v1/2020.blackboxnlp-1.20
Cite (ACL):
Andrew Runge and Eduard Hovy. 2020. Exploring Neural Entity Representations for Semantic Information. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 204–216, Online. Association for Computational Linguistics.
Cite (Informal):
Exploring Neural Entity Representations for Semantic Information (Runge & Hovy, BlackboxNLP 2020)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2020.blackboxnlp-1.20.pdf
Code:
AJRunge523/entitylens