Perturbed examples reveal invariances shared by language models

Ruchit Rawal, Mariya Toneva


Abstract
The rapid growth in natural language processing (NLP) research has led to numerous new models, outpacing our understanding of how they compare to established ones. One major reason for this difficulty is saturating benchmarks, which may not accurately reflect differences in model performance in the wild. In this work, we introduce a novel framework to compare two NLP models by revealing their shared invariance to interpretable input perturbations that target a specific linguistic capability. Via experiments on models from both the same and different architecture families, this framework offers insights into how changes in models (e.g., distillation, size increase) affect linguistic capabilities. Furthermore, our framework enables evaluation of invariances between commercial black-box models (e.g., the InstructGPT family) and models that are better understood (e.g., GPT-2). Across experiments, we observe that large language models share many of the invariances encoded by models of various sizes, whereas the invariances encoded by large models are only shared by other large models. Possessing a wide variety of invariances may be key to the recent successes of large language models, and our framework can shed light on the types of invariances retained or emerging in new models. We make the code publicly available.
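The abstract's core idea, measuring how often one model's invariances to perturbed inputs are shared by another, could at a high level resemble the following sketch. This is an illustrative assumption, not the paper's actual metric or implementation: the function names, toy stand-in models, and example perturbation pairs are all hypothetical.

```python
# Hypothetical sketch of shared-invariance measurement. Names, toy models,
# and the metric's exact form are illustrative assumptions, not the paper's
# implementation.

def prediction_invariant(model, original, perturbed):
    """True if the model's prediction is unchanged by the perturbation."""
    return model(original) == model(perturbed)

def shared_invariance(model_a, model_b, pairs):
    """Among (original, perturbed) pairs where model_a is invariant,
    the fraction on which model_b is also invariant."""
    a_inv = [(o, p) for o, p in pairs if prediction_invariant(model_a, o, p)]
    if not a_inv:
        return 0.0
    return sum(prediction_invariant(model_b, o, p) for o, p in a_inv) / len(a_inv)

# Toy stand-ins for real model predictions:
word_count = lambda s: len(s.split())   # invariant to casing and synonym swaps
lowercase = lambda s: s.lower()         # invariant to casing changes only

pairs = [
    ("the cat sat", "the cat SAT"),   # casing perturbation
    ("a dog ran", "a DOG ran"),       # casing perturbation
    ("hi there", "hello there"),      # synonym perturbation
]
score = shared_invariance(word_count, lowercase, pairs)  # 2/3
```

In this toy setup, `word_count` is invariant to all three perturbations, while `lowercase` is invariant only to the casing ones, so two thirds of the first model's invariances are shared by the second. The real framework would instead compare the predictions of trained language models under linguistically targeted perturbations.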
Anthology ID:
2024.findings-acl.687
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
11564–11584
URL:
https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-acl.687/
DOI:
10.18653/v1/2024.findings-acl.687
Cite (ACL):
Ruchit Rawal and Mariya Toneva. 2024. Perturbed examples reveal invariances shared by language models. In Findings of the Association for Computational Linguistics: ACL 2024, pages 11564–11584, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Perturbed examples reveal invariances shared by language models (Rawal & Toneva, Findings 2024)
PDF:
https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-acl.687.pdf