Are Sample-Efficient NLP Models More Robust?

Nelson F. Liu, Ananya Kumar, Percy Liang, Robin Jia


Abstract
Recent results in image classification and extractive question answering have shown that pre-trained models trained on less in-distribution data have better out-of-distribution performance. However, it is unclear how broadly these trends hold. We conduct a large empirical study across three tasks, three broadly-applicable modeling interventions (increasing model size, using a different adaptation method, and pre-training on more data), and 14 diverse datasets to investigate the relationship between sample efficiency (amount of data needed to reach a given ID accuracy) and robustness (how models fare on OOD evaluation). We find that higher sample efficiency is only correlated with better average OOD robustness on some modeling interventions and tasks, but not others. On individual datasets, models with lower sample efficiency can even be more robust. These results suggest that general-purpose methods for improving sample efficiency are unlikely to yield universal OOD robustness improvements, since such improvements are highly dataset- and task-dependent. Even in an era of large, multi-purpose pre-trained models, task-specific decisions may often be necessary for OOD generalization.
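To make the two quantities in the abstract concrete, the following is a minimal illustrative sketch (not the authors' code or data) of how one might estimate sample efficiency as the number of training examples needed to reach a target ID accuracy, and then compare it against OOD accuracy. The model names, learning curves, and accuracy numbers below are hypothetical.

```python
import numpy as np

# Hypothetical learning curves: ID accuracy at increasing training-set sizes.
train_sizes = np.array([100, 500, 1000, 5000, 10000])
id_curves = {
    "model_a": np.array([0.55, 0.68, 0.74, 0.82, 0.85]),
    "model_b": np.array([0.48, 0.62, 0.70, 0.80, 0.84]),
}
# Hypothetical OOD accuracy for each model (e.g., at the largest training-set size).
ood_acc = {"model_a": 0.71, "model_b": 0.69}

def samples_to_reach(target_id_acc, sizes, accs):
    """Estimate how many training examples are needed to reach the target
    ID accuracy, by linear interpolation over a monotone learning curve."""
    return float(np.interp(target_id_acc, accs, sizes))

target = 0.80  # common ID accuracy threshold for comparing models
efficiency = {name: samples_to_reach(target, train_sizes, curve)
              for name, curve in id_curves.items()}

# A more sample-efficient model needs fewer examples to hit the target.
# With many models and datasets, one would report a correlation between
# sample efficiency and OOD robustness rather than a pairwise comparison.
for name in id_curves:
    print(f"{name}: ~{efficiency[name]:.0f} examples to reach ID acc {target}, "
          f"OOD acc {ood_acc[name]:.2f}")
```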
Anthology ID:
2023.acl-short.144
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1689–1709
URL:
https://aclanthology.org/2023.acl-short.144
DOI:
10.18653/v1/2023.acl-short.144
Cite (ACL):
Nelson F. Liu, Ananya Kumar, Percy Liang, and Robin Jia. 2023. Are Sample-Efficient NLP Models More Robust?. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1689–1709, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Are Sample-Efficient NLP Models More Robust? (Liu et al., ACL 2023)
PDF:
https://preview.aclanthology.org/naacl24-info/2023.acl-short.144.pdf