Abstract
We build a reference for the task of Open Information Extraction, on five documents. We tentatively resolve a number of issues that arise, including coreference and granularity, and we take steps toward addressing inference, a significant problem. We seek to better pinpoint the requirements for the task. We produce our annotation guidelines specifying what is correct to extract and what is not. In turn, we use this reference to score existing Open IE systems. We address the non-trivial problem of evaluating the extractions produced by systems against the reference tuples, and share our evaluation script. Among seven compared extractors, we find the MinIE system to perform best.
- Anthology ID: W19-4002
- Volume: Proceedings of the 13th Linguistic Annotation Workshop
- Month: August
- Year: 2019
- Address: Florence, Italy
- Venue: LAW
- SIG: SIGANN
- Publisher: Association for Computational Linguistics
- Pages: 6–15
- URL: https://aclanthology.org/W19-4002
- DOI: 10.18653/v1/W19-4002
- Cite (ACL): William Lechelle, Fabrizio Gotti, and Philippe Langlais. 2019. WiRe57 : A Fine-Grained Benchmark for Open Information Extraction. In Proceedings of the 13th Linguistic Annotation Workshop, pages 6–15, Florence, Italy. Association for Computational Linguistics.
- Cite (Informal): WiRe57 : A Fine-Grained Benchmark for Open Information Extraction (Lechelle et al., LAW 2019)
- PDF: https://preview.aclanthology.org/auto-file-uploads/W19-4002.pdf
- Code: rali-udem/WiRe57
- Data: QA-SRL
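The abstract notes that matching system extractions against reference tuples is non-trivial: a predicted (subject, relation, object) triple rarely reproduces the gold tuple word for word. A minimal sketch of one common approach — token-overlap scoring with greedy one-to-one matching — is shown below. This is an illustration under assumed conventions (a fixed F1 threshold, simple whitespace tokenization), not the actual WiRe57 evaluation script; see the rali-udem/WiRe57 repository for the real scorer.

```python
# Hypothetical sketch: score predicted Open IE tuples against reference
# tuples via token-level F1 and greedy 1-to-1 matching.
# NOT the WiRe57 scorer; threshold and tokenization are assumptions.

def token_f1(pred, gold):
    """Token-level F1 between two (subject, relation, object) tuples."""
    pred_tokens = " ".join(pred).lower().split()
    gold_tokens = " ".join(gold).lower().split()
    remaining = list(gold_tokens)
    common = 0
    for tok in pred_tokens:        # count multiset overlap
        if tok in remaining:
            remaining.remove(tok)
            common += 1
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def score(predictions, references, threshold=0.5):
    """Greedily match each prediction to its best unmatched reference;
    a pair counts as correct when its token F1 reaches the threshold."""
    unmatched = list(references)
    matched = 0
    for pred in predictions:
        if not unmatched:
            break
        best = max(unmatched, key=lambda ref: token_f1(pred, ref))
        if token_f1(pred, best) >= threshold:
            unmatched.remove(best)
            matched += 1
    precision = matched / len(predictions) if predictions else 0.0
    recall = matched / len(references) if references else 0.0
    return precision, recall
```

For example, the prediction ("Dana", "founded", "lab") matches the reference ("Dana", "founded", "the lab") despite the dropped determiner, while a spurious second extraction lowers precision but not recall.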