Zero-Shot Relation Extraction via Reading Comprehension

Omer Levy, Minjoon Seo, Eunsol Choi, Luke Zettlemoyer

Abstract
We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. This reduction has several advantages: we can (1) learn relation-extraction models by extending recent neural reading-comprehension techniques, (2) build very large training sets for those models by combining relation-specific crowd-sourced questions with distant supervision, and even (3) do zero-shot learning by extracting new relation types that are only specified at test-time, for which we have no labeled training examples. Experiments on a Wikipedia slot-filling task demonstrate that the approach can generalize to new questions for known relation types with high accuracy, and that zero-shot generalization to unseen relation types is possible, at lower accuracy levels, setting the bar for future work on this task.
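
To make the reduction in the abstract concrete, here is a minimal sketch: each relation slot is paired with one or more natural-language question templates, and a reading-comprehension model answers the instantiated question over the sentence, abstaining when no answer is present. All names below (TEMPLATES, extract_relation, toy_reader) and the template strings are illustrative assumptions, not the authors' released code; in the paper, templates are crowd-sourced and the reader is a neural span-extraction model that can return "no answer".

    # Sketch of the reduction: relation extraction as reading comprehension.
    # Names and templates are hypothetical, for illustration only.
    from typing import Callable, Optional

    # One or more question templates per relation slot; "XXX" marks the
    # position where the subject entity is filled in.
    TEMPLATES = {
        "educated_at": ["Where did XXX study?", "Which school did XXX attend?"],
        "occupation": ["What did XXX do for a living?"],
    }

    def extract_relation(
        qa_model: Callable[[str, str], Optional[str]],
        sentence: str,
        entity: str,
        relation: str,
    ) -> Optional[str]:
        """Fill the (entity, relation, ?) slot: ask each instantiated question
        about `entity` over `sentence` and return the first answer span, or
        None when the reader abstains on every question."""
        for template in TEMPLATES.get(relation, []):
            question = template.replace("XXX", entity)
            answer = qa_model(question, sentence)  # answer span or None
            if answer is not None:
                return answer
        return None

    # Trivial stand-in reader that "knows" one fact, just to exercise the loop.
    def toy_reader(question: str, context: str) -> Optional[str]:
        return "Harvard" if "study" in question and "Harvard" in context else None

    print(extract_relation(toy_reader, "Alice studied at Harvard.", "Alice", "educated_at"))
    # -> "Harvard"

    # Zero-shot use: a relation type unseen in training only needs a new
    # question template; the reader itself is unchanged.
    TEMPLATES["spouse"] = ["Who is XXX married to?"]

Because the reader conditions on the question text rather than on a fixed relation inventory, adding a template at test time is all that is needed to query a new relation type, which is the zero-shot setting the paper evaluates.
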
Anthology ID: K17-1034
Volume: Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)
Month: August
Year: 2017
Address: Vancouver, Canada
Editors: Roger Levy, Lucia Specia
Venue: CoNLL
SIG: SIGNLL
Publisher: Association for Computational Linguistics
Pages: 333–342
URL: https://aclanthology.org/K17-1034
DOI: 10.18653/v1/K17-1034
Cite (ACL):
Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-Shot Relation Extraction via Reading Comprehension. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 333–342, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Zero-Shot Relation Extraction via Reading Comprehension (Levy et al., CoNLL 2017)
PDF: https://preview.aclanthology.org/teach-a-man-to-fish/K17-1034.pdf
Code: additional community code
Data: SQuAD, WikiReading