BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance

R. Thomas McCoy, Junghyun Min, Tal Linzen


Abstract
If the same neural network architecture is trained multiple times on the same dataset, will it make similar linguistic generalizations across runs? To study this question, we fine-tuned 100 instances of BERT on the Multi-genre Natural Language Inference (MNLI) dataset and evaluated them on the HANS dataset, which tests syntactic generalization in natural language inference. On the MNLI development set, the behavior of all instances was remarkably consistent, with accuracy ranging between 83.6% and 84.8%. In stark contrast, the same models varied widely in their generalization performance. For example, on the simple case of subject-object swap (e.g., determining that “the doctor visited the lawyer” does not entail “the lawyer visited the doctor”), accuracy ranged from 0.0% to 66.2%. Such variation is likely due to the presence of many local minima in the loss surface that are equally attractive to a low-bias learner such as a neural network; decreasing the variability may therefore require models with stronger inductive biases.
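
The setup described in the abstract can be approximated in code. The sketch below, which assumes the Hugging Face transformers and datasets libraries, fine-tunes several BERT instances on MNLI that differ only in their random seed and then measures how much their HANS accuracy varies. It is not the authors' implementation: the hyperparameters, the number of runs, and the label-mapping details are illustrative assumptions (the paper trains 100 instances).

```python
# Minimal sketch of the experiment described in the abstract; NOT the authors' code.
# Assumes the Hugging Face `transformers` and `datasets` libraries; hyperparameters
# and the number of runs are illustrative, not the paper's settings.
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, max_length=128)

mnli = load_dataset("multi_nli").map(tokenize, batched=True)
hans = load_dataset("hans", split="validation").map(tokenize, batched=True)

def run_one_instance(seed):
    """Fine-tune one BERT instance on MNLI and return its overall HANS accuracy."""
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=3)  # MNLI: entailment / neutral / contradiction
    args = TrainingArguments(
        output_dir=f"runs/seed{seed}",
        seed=seed,                      # the only thing that differs across instances
        num_train_epochs=3,
        per_device_train_batch_size=32,
        learning_rate=2e-5,
        save_strategy="no",
        report_to="none",
    )
    trainer = Trainer(model=model, args=args,
                      train_dataset=mnli["train"], tokenizer=tokenizer)
    trainer.train()

    # HANS is two-way (entailment vs. non-entailment), so the model's neutral and
    # contradiction predictions are both collapsed to non-entailment before scoring.
    logits = trainer.predict(hans).predictions
    preds = np.argmax(logits, axis=-1)
    preds_binary = np.where(preds == 0, 0, 1)   # assumes MNLI label 0 = entailment
    gold = np.array(hans["label"])              # HANS: 0 = entailment, 1 = non-entailment
    return float((preds_binary == gold).mean())

# The paper trains 100 instances; a handful is shown here to keep the sketch cheap.
accuracies = [run_one_instance(seed) for seed in range(5)]
print(f"HANS accuracy range across runs: {min(accuracies):.3f}-{max(accuracies):.3f}")
```

Per-case results such as the subject-object swap example in the abstract can be obtained by grouping the HANS predictions by the dataset's subcase field before computing accuracy.
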
Anthology ID:
2020.blackboxnlp-1.21
Volume:
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2020
Address:
Online
Editors:
Afra Alishahi, Yonatan Belinkov, Grzegorz Chrupała, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad
Venue:
BlackboxNLP
Publisher:
Association for Computational Linguistics
Pages:
217–227
URL:
https://aclanthology.org/2020.blackboxnlp-1.21
DOI:
10.18653/v1/2020.blackboxnlp-1.21
Bibkey:
Cite (ACL):
R. Thomas McCoy, Junghyun Min, and Tal Linzen. 2020. BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 217–227, Online. Association for Computational Linguistics.
Cite (Informal):
BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance (McCoy et al., BlackboxNLP 2020)
PDF:
https://preview.aclanthology.org/naacl24-info/2020.blackboxnlp-1.21.pdf
Video:
https://slideslive.com/38939766
Data
MultiNLI