Beyond Accuracy: Revisiting Out-of-Distribution Generalization in NLI Models

Zahra Delbari, Mohammad Taher Pilehvar


Abstract
This study investigates how well discriminative transformers generalize in Natural Language Inference (NLI) tasks. We specifically focus on a well-studied bias in this task: the tendency of models to rely on superficial features and dataset biases rather than a true understanding of language. We argue that the performance gap observed between training and analysis datasets does not necessarily indicate a lack of knowledge within the model. Instead, the gap often points to a misalignment between the decision boundaries of the classifier head and the representations the encoder learns for the analysis samples. By investigating the representation space of NLI models across different analysis datasets, we demonstrate that, even in settings where accuracy is nearly random, samples from opposing classes remain almost perfectly linearly separable in the encoder’s representation space. This suggests that, although the classifier head may fail on analysis data, the encoder still generalizes and encodes representations that allow for effective discrimination between NLI classes.
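The central claim above can be illustrated with a linear probe: fit a simple linear classifier directly on the encoder's representations and check whether the two classes are separable, regardless of what the original classifier head predicts. The sketch below is hypothetical and uses synthetic Gaussian clusters in place of real encoder outputs (in practice these would be, e.g., sentence-pair embeddings from a fine-tuned NLI model); the class means, dimensionality, and noise scale are all illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the linear-probe idea: even if a model's classifier
# head fails on out-of-distribution analysis data, a freshly fitted linear
# probe on the encoder's representations can reveal near-perfect class
# separability. Encoder outputs are simulated with synthetic Gaussian
# clusters (an assumption for illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim, n = 64, 500  # hypothetical embedding size and samples per class

# Two classes whose representations form linearly separable clusters.
class_a = rng.normal(loc=+1.0, scale=0.5, size=(n, dim))
class_b = rng.normal(loc=-1.0, scale=0.5, size=(n, dim))
X = np.vstack([class_a, class_b])
y = np.array([1] * n + [0] * n)

# Fit a linear probe on the representations and measure separability.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(f"linear probe accuracy: {probe.score(X, y):.2f}")
```

In the paper's setting, high probe accuracy despite near-random head accuracy would indicate that the encoder preserves class-discriminative structure and the failure lies in the head's decision boundaries.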
Anthology ID:
2025.conll-1.36
Volume:
Proceedings of the 29th Conference on Computational Natural Language Learning
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Gemma Boleda, Michael Roth
Venues:
CoNLL | WS
Publisher:
Association for Computational Linguistics
Pages:
557–570
URL:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.conll-1.36/
Cite (ACL):
Zahra Delbari and Mohammad Taher Pilehvar. 2025. Beyond Accuracy: Revisiting Out-of-Distribution Generalization in NLI Models. In Proceedings of the 29th Conference on Computational Natural Language Learning, pages 557–570, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Beyond Accuracy: Revisiting Out-of-Distribution Generalization in NLI Models (Delbari & Pilehvar, CoNLL 2025)
PDF:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.conll-1.36.pdf