Ecologically Valid Explanations for Label Variation in NLI

Nan-Jiang Jiang, Chenhao Tan, Marie-Catherine de Marneffe


Abstract
Human label variation, or annotation disagreement, exists in many natural language processing (NLP) tasks, including natural language inference (NLI). To gain direct evidence of how NLI label variation arises, we build LiveNLI, an English dataset of 1,415 ecologically valid explanations (annotators explain the NLI labels they chose) for 122 MNLI items (at least 10 explanations per item). The LiveNLI explanations confirm that people can systematically vary in their interpretation and highlight within-label variation: annotators sometimes choose the same label for different reasons. This suggests that explanations are crucial for navigating label interpretations in general. We few-shot prompt large language models to generate explanations, but the results are inconsistent: the models sometimes produce valid and informative explanations, but they also generate implausible ones that do not support the label, highlighting directions for improvement.
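
A minimal sketch of the few-shot prompting setup mentioned in the abstract, assuming the OpenAI Python client; the demonstration items, prompt format, and model name below are illustrative assumptions, not the paper's exact configuration.

from openai import OpenAI

# Illustrative demonstrations (made-up premise/hypothesis pairs,
# not items from LiveNLI or the paper's actual prompt).
FEW_SHOT = [
    {
        "premise": "A man is playing a guitar on stage.",
        "hypothesis": "A musician is performing.",
        "label": "entailment",
        "explanation": "Playing a guitar on stage is performing, and "
                       "someone who plays guitar can be called a musician.",
    },
    {
        "premise": "Two children are sitting at a table.",
        "hypothesis": "The children are eating lunch.",
        "label": "neutral",
        "explanation": "Sitting at a table does not say whether the "
                       "children are eating anything.",
    },
]

def build_prompt(premise: str, hypothesis: str) -> str:
    """Assemble a few-shot prompt asking for an NLI label plus an explanation."""
    lines = [
        "Given a premise and a hypothesis, choose an NLI label "
        "(entailment, neutral, contradiction) and explain your choice.",
        "",
    ]
    for ex in FEW_SHOT:
        lines += [
            f"Premise: {ex['premise']}",
            f"Hypothesis: {ex['hypothesis']}",
            f"Label: {ex['label']}",
            f"Explanation: {ex['explanation']}",
            "",
        ]
    lines += [f"Premise: {premise}", f"Hypothesis: {hypothesis}", "Label:"]
    return "\n".join(lines)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4",  # model choice is an assumption
    messages=[{"role": "user", "content": build_prompt(
        "The dog ran across the park.", "An animal was outdoors.")}],
)
print(response.choices[0].message.content)

The model is asked to continue after "Label:", so the completion should contain both a chosen label and an explanation; whether the generated explanation actually supports the chosen label is the validity question the paper investigates.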
Anthology ID:
2023.findings-emnlp.712
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10622–10633
URL:
https://aclanthology.org/2023.findings-emnlp.712
DOI:
10.18653/v1/2023.findings-emnlp.712
Cite (ACL):
Nan-Jiang Jiang, Chenhao Tan, and Marie-Catherine de Marneffe. 2023. Ecologically Valid Explanations for Label Variation in NLI. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10622–10633, Singapore. Association for Computational Linguistics.
Cite (Informal):
Ecologically Valid Explanations for Label Variation in NLI (Jiang et al., Findings 2023)
PDF:
https://preview.aclanthology.org/improve-issue-templates/2023.findings-emnlp.712.pdf