Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models

Clara Na, Sanket Vaibhav Mehta, Emma Strubell


Abstract
Model compression by way of parameter pruning, quantization, or distillation has recently gained popularity as an approach for reducing the computational requirements of modern deep neural network models for NLP. Inspired by prior works suggesting a connection between simpler, more generalizable models and those that lie within wider loss basins, we hypothesize that optimizing for flat minima should lead to simpler parameterizations and thus more compressible models. We propose to combine sharpness-aware minimization (SAM) with various task-specific model compression methods, including iterative magnitude pruning (IMP), structured pruning with a distillation objective, and post-training dynamic quantization. Empirically, we show that optimizing for flatter minima consistently leads to greater compressibility of parameters compared to vanilla Adam when fine-tuning BERT models, with little to no loss in accuracy on the GLUE text classification and SQuAD question answering benchmarks. Moreover, SAM finds superior winning tickets during IMP that 1) are amenable to vanilla Adam optimization, and 2) transfer more effectively across tasks.
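As a rough illustration of the SAM procedure the abstract describes (perturb the weights toward a nearby worst-case point, then update from the gradient taken there, before applying a compression step), below is a minimal PyTorch sketch that wraps a base Adam optimizer in the standard two-step SAM update and finishes with one-shot global magnitude pruning. The class and helper names (SAM, first_step, second_step) and the 50% sparsity level are illustrative assumptions, not the authors' released code or the exact iterative magnitude pruning schedule used in the paper.

import torch

class SAM(torch.optim.Optimizer):
    # Sketch of sharpness-aware minimization (Foret et al., 2021) around a base optimizer.
    def __init__(self, params, base_optimizer_cls, rho=0.05, **kwargs):
        defaults = dict(rho=rho, **kwargs)
        super().__init__(params, defaults)
        # Base optimizer shares the same parameter groups.
        self.base_optimizer = base_optimizer_cls(self.param_groups, **kwargs)

    @torch.no_grad()
    def first_step(self):
        # Ascend to the approximate worst-case point within an L2 ball of radius rho.
        grad_norm = torch.norm(torch.stack([
            p.grad.norm(p=2)
            for group in self.param_groups
            for p in group["params"] if p.grad is not None
        ]), p=2)
        for group in self.param_groups:
            scale = group["rho"] / (grad_norm + 1e-12)
            for p in group["params"]:
                if p.grad is None:
                    continue
                e_w = p.grad * scale
                p.add_(e_w)                      # w <- w + e(w)
                self.state[p]["e_w"] = e_w

    @torch.no_grad()
    def second_step(self):
        # Restore the original weights, then step the base optimizer
        # using the gradient computed at the perturbed point.
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                p.sub_(self.state[p]["e_w"])
        self.base_optimizer.step()

# Usage sketch: two forward/backward passes per batch, then prune once at the end.
model = torch.nn.Linear(10, 2)                   # stand-in for a fine-tuned BERT
optimizer = SAM(model.parameters(), torch.optim.Adam, rho=0.05, lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))

for _ in range(3):
    loss_fn(model(x), y).backward()
    optimizer.first_step()
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()              # gradient at the perturbed weights
    optimizer.second_step()
    optimizer.zero_grad()

# One-shot global magnitude pruning at an assumed 50% sparsity.
with torch.no_grad():
    flat = torch.cat([p.abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(flat, 0.5)
    for p in model.parameters():
        p.mul_((p.abs() >= threshold).float())

Note that SAM requires two forward/backward passes per update, which is the main training-time overhead traded for the improved compressibility reported in the paper.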
Anthology ID:
2022.findings-emnlp.361
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4909–4936
URL:
https://aclanthology.org/2022.findings-emnlp.361
DOI:
10.18653/v1/2022.findings-emnlp.361
Cite (ACL):
Clara Na, Sanket Vaibhav Mehta, and Emma Strubell. 2022. Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4909–4936, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models (Na et al., Findings 2022)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/2022.findings-emnlp.361.pdf
Video:
https://preview.aclanthology.org/naacl-24-ws-corrections/2022.findings-emnlp.361.mp4