A survey on Recognizing Textual Entailment as an NLP Evaluation

Adam Poliak


Abstract
Recognizing Textual Entailment (RTE) was proposed as a unified evaluation framework to compare semantic understanding of different NLP systems. In this survey paper, we provide an overview of different approaches for evaluating and understanding the reasoning capabilities of NLP systems. We then focus our discussion on RTE by highlighting prominent RTE datasets as well as advances in RTE datasets that focus on specific linguistic phenomena that can be used to evaluate NLP systems on a fine-grained level. We conclude by arguing that when evaluating NLP systems, the community should utilize newly introduced RTE datasets that focus on specific linguistic phenomena.
Anthology ID:
2020.eval4nlp-1.10
Volume:
Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems
Month:
November
Year:
2020
Address:
Online
Venue:
Eval4NLP
Publisher:
Association for Computational Linguistics
Pages:
92–109
URL:
https://aclanthology.org/2020.eval4nlp-1.10
DOI:
10.18653/v1/2020.eval4nlp-1.10
Bibkey:
Cite (ACL):
Adam Poliak. 2020. A survey on Recognizing Textual Entailment as an NLP Evaluation. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, pages 92–109, Online. Association for Computational Linguistics.
Cite (Informal):
A survey on Recognizing Textual Entailment as an NLP Evaluation (Poliak, Eval4NLP 2020)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2020.eval4nlp-1.10.pdf
Video:
https://slideslive.com/38939716
Data
GLUE, MultiNLI, SuperGLUE