Maintaining Quality in FEVER Annotation

Leon Derczynski, Julie Binau, Henri Schulte


Abstract
We propose two measures for assessing the quality of constructed claims in the FEVER task. Annotating data for this task involves creating claims that support or refute a set of evidence. Automatic annotation processes often leave superficial patterns in data, which learning systems can detect instead of performing the underlying task. Human annotators can also leave such patterns, whether deliberately or inadvertently (e.g. due to fatigue). The two measures introduced here attempt to detect the impact of these superficial patterns. One is DCI, a new measure based on information theory and distributionality; the other, utility, extends neural probing work on the ARCT task. We demonstrate both measures on a recent major dataset: that of the English FEVER task in 2019.
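To make the notion of "superficial patterns" concrete, the sketch below scores token-label associations with pointwise mutual information (PMI). This is not the paper's DCI or utility measure; it is a minimal illustration of the kind of annotation artifact those measures target, e.g. a negation word appearing mostly in REFUTES claims, which a classifier can exploit without performing verification. The function name and toy data are hypothetical.

```python
import math
from collections import Counter

def token_label_pmi(claims, labels, min_count=5):
    """Rank (token, label) pairs by PMI.

    High-PMI tokens are giveaway cues: a model can predict the label
    from the claim's surface form alone, without using the evidence.
    """
    token_counts = Counter()
    label_counts = Counter()
    joint_counts = Counter()
    total = 0
    for claim, label in zip(claims, labels):
        label_counts[label] += 1
        # Count each token once per claim (document frequency).
        for tok in set(claim.lower().split()):
            token_counts[tok] += 1
            joint_counts[(tok, label)] += 1
        total += 1
    scores = {}
    for (tok, label), joint in joint_counts.items():
        if token_counts[tok] < min_count:
            continue  # skip rare tokens; their PMI is unreliable
        p_tok = token_counts[tok] / total
        p_label = label_counts[label] / total
        p_joint = joint / total
        scores[(tok, label)] = math.log(p_joint / (p_tok * p_label))
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical toy data: 'not' is a giveaway cue for REFUTES.
claims = ["Paris is the capital of France",
          "Paris is not the capital of France",
          "Oslo is not in Sweden",
          "Oslo is in Norway"]
labels = ["SUPPORTS", "REFUTES", "REFUTES", "SUPPORTS"]
print(token_label_pmi(claims, labels, min_count=1)[:5])
```

On real FEVER-style data, a skewed head of this ranking suggests the claim set leaks labels through surface cues, which is the situation the paper's quality measures are designed to flag.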
Anthology ID:
2020.fever-1.6
Volume:
Proceedings of the Third Workshop on Fact Extraction and VERification (FEVER)
Month:
July
Year:
2020
Address:
Online
Venue:
FEVER
Publisher:
Association for Computational Linguistics
Pages:
42–46
URL:
https://aclanthology.org/2020.fever-1.6
DOI:
10.18653/v1/2020.fever-1.6
Cite (ACL):
Leon Derczynski, Julie Binau, and Henri Schulte. 2020. Maintaining Quality in FEVER Annotation. In Proceedings of the Third Workshop on Fact Extraction and VERification (FEVER), pages 42–46, Online. Association for Computational Linguistics.
Cite (Informal):
Maintaining Quality in FEVER Annotation (Derczynski et al., FEVER 2020)
PDF:
https://aclanthology.org/2020.fever-1.6.pdf
Video:
http://slideslive.com/38929664