Abstract
Despite their success, modern language models are fragile. Even small changes in their training pipeline can lead to unexpected results. We study this phenomenon by examining the robustness of ALBERT (Lan et al., 2020) in combination with Stochastic Weight Averaging (SWA)—a cheap way of ensembling—on a sentiment analysis task (SST-2). In particular, we analyze SWA’s stability via CheckList criteria (Ribeiro et al., 2020), examining the agreement on errors made by models differing only in their random seed. We hypothesize that SWA is more stable because it ensembles model snapshots taken along the gradient descent trajectory. We quantify stability by comparing the models’ mistakes with Fleiss’ Kappa (Fleiss, 1971) and overlap ratio scores. We find that SWA reduces error rates in general; yet the models still suffer from their own distinct biases (according to CheckList).
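Two ingredients of the abstract can be made concrete with small sketches. First, SWA itself: the snippet below illustrates averaging weight snapshots along the SGD trajectory using PyTorch's `torch.optim.swa_utils`. The toy linear classifier and random-data loop are placeholders, not the authors' ALBERT/SST-2 setup (their actual code is in cltl/robustness-albert).

```python
import torch
from torch import nn
from torch.optim.swa_utils import AveragedModel

# Toy stand-in classifier; the paper fine-tunes ALBERT on SST-2,
# which this sketch does not reproduce.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# AveragedModel keeps an equal-weight running average of the weight
# snapshots it is fed -- the "cheap ensemble" behind SWA.
swa_model = AveragedModel(model)

for step in range(100):
    x = torch.randn(32, 10)
    y = torch.randint(0, 2, (32,))
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 10 == 0:  # average a snapshot from the SGD trajectory
        swa_model.update_parameters(model)

# swa_model now holds the averaged weights and is used for prediction.
logits = swa_model(torch.randn(4, 10))
```

Second, the stability measurement: a sketch of Fleiss' Kappa computed over per-seed error indicators (each fine-tuned model acts as a "rater" labeling an example as correct or incorrect), plus one plausible overlap-ratio definition. The Jaccard overlap of error sets used here is an assumption; the paper's exact formula may differ.

```python
import numpy as np

def fleiss_kappa(labels: np.ndarray) -> float:
    """Fleiss' kappa for an (items x raters) integer label matrix.

    Here a 'rater' is one model (one random seed) and the label is
    1 if that model misclassifies the example, else 0.
    """
    n_items, n_raters = labels.shape
    categories = np.unique(labels)
    # counts[i, j] = number of raters assigning category j to item i
    counts = np.stack([(labels == c).sum(axis=1) for c in categories], axis=1)
    # Per-item agreement, then chance-corrected aggregate.
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e = np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)

def error_overlap(err_a: np.ndarray, err_b: np.ndarray) -> float:
    """Overlap of two models' error sets (Jaccard; an assumed definition)."""
    inter = np.logical_and(err_a, err_b).sum()
    union = np.logical_or(err_a, err_b).sum()
    return inter / union if union else 1.0

# errors[i, s] = 1 if the model trained with seed s got example i wrong
errors = np.array([[1, 1, 0], [0, 0, 0], [1, 0, 1], [0, 0, 0]])
print(fleiss_kappa(errors))                       # agreement across seeds
print(error_overlap(errors[:, 0], errors[:, 1]))  # pairwise error overlap
```

High kappa and high overlap mean the seeds fail on the same examples, i.e. the training procedure is stable in the sense the abstract describes.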
- Anthology ID: 2021.eval4nlp-1.3
- Volume: Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems
- Month: November
- Year: 2021
- Address: Punta Cana, Dominican Republic
- Editors: Yang Gao, Steffen Eger, Wei Zhao, Piyawat Lertvittayakumjorn, Marina Fomicheva
- Venue: Eval4NLP
- Publisher: Association for Computational Linguistics
- Pages: 16–31
- URL: https://aclanthology.org/2021.eval4nlp-1.3
- DOI: 10.18653/v1/2021.eval4nlp-1.3
- Cite (ACL): Urja Khurana, Eric Nalisnick, and Antske Fokkens. 2021. How Emotionally Stable is ALBERT? Testing Robustness with Stochastic Weight Averaging on a Sentiment Analysis Task. In Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems, pages 16–31, Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Cite (Informal): How Emotionally Stable is ALBERT? Testing Robustness with Stochastic Weight Averaging on a Sentiment Analysis Task (Khurana et al., Eval4NLP 2021)
- PDF: https://preview.aclanthology.org/corrections-2024-07/2021.eval4nlp-1.3.pdf
- Code: cltl/robustness-albert
- Data: GLUE, SST, SST-2