Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings

Hendrik Schuff, Hsiu-Yu Yang, Heike Adel, Ngoc Thang Vu


Abstract
Natural language inference (NLI) requires models to learn and apply commonsense knowledge. These reasoning abilities are particularly important for explainable NLI systems that generate a natural language explanation in addition to their label prediction. The integration of external knowledge has been shown to improve NLI systems; here, we investigate whether it can also improve their explanation capabilities. To this end, we examine different sources of external knowledge and evaluate the performance of our models on in-domain data as well as on special transfer datasets designed to assess fine-grained reasoning capabilities. We find that different sources of knowledge affect reasoning abilities differently; for example, implicit knowledge stored in language models can hinder reasoning on numbers and negations. Finally, we conduct the largest and most fine-grained explainable NLI crowdsourcing study to date. It reveals that even large differences in automatic performance scores are not reflected in human ratings of label, explanation, commonsense, or grammar correctness.
Anthology ID:
2021.blackboxnlp-1.3
Volume:
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Jasmijn Bastings, Yonatan Belinkov, Emmanuel Dupoux, Mario Giulianelli, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad
Venue:
BlackboxNLP
Publisher:
Association for Computational Linguistics
Pages:
26–41
URL:
https://preview.aclanthology.org/build-pipeline-with-new-library/2021.blackboxnlp-1.3/
DOI:
10.18653/v1/2021.blackboxnlp-1.3
Cite (ACL):
Hendrik Schuff, Hsiu-Yu Yang, Heike Adel, and Ngoc Thang Vu. 2021. Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 26–41, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings (Schuff et al., BlackboxNLP 2021)
PDF:
https://preview.aclanthology.org/build-pipeline-with-new-library/2021.blackboxnlp-1.3.pdf
Video:
https://preview.aclanthology.org/build-pipeline-with-new-library/2021.blackboxnlp-1.3.mp4
Code
boschresearch/external-knowledge-explainable-nli
Data
ConceptNet
SNLI
e-SNLI