Abstract
We investigate the effects of post-training quantization and quantization-aware training on the generalization of Transformer language models. We present a new method called self-distilled quantization (SDQ) that minimizes accumulative quantization errors and outperforms baselines. We apply SDQ to the multilingual models XLM-R-Base and InfoXLM-Base and demonstrate that both models can be reduced from 32-bit floating-point weights to 8-bit integer weights while maintaining a high level of performance on the XGLUE benchmark. Our results also highlight the challenges of quantizing multilingual models, which must generalize to languages they were not fine-tuned on.
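As a concrete illustration of the fp32-to-int8 weight reduction described in the abstract, the sketch below applies generic symmetric per-tensor post-training quantization to a single weight matrix. This is not the paper's SDQ procedure; the scale choice, rounding scheme, and matrix shape are illustrative assumptions.

```python
# Minimal sketch of symmetric per-tensor int8 post-training quantization.
# Illustrative only: SDQ additionally uses a self-distillation objective
# during training, which is not shown here.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map fp32 weights to int8 with one symmetric per-tensor scale."""
    scale = np.abs(w).max() / 127.0          # largest magnitude maps to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an fp32 approximation of the original weights."""
    return q.astype(np.float32) * scale

# Hypothetical example: one 768x768 Transformer weight matrix.
w = np.random.randn(768, 768).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("mean abs quantization error:", np.abs(w - w_hat).mean())
```

The per-layer errors introduced by such rounding accumulate through the network; SDQ is motivated by reducing this accumulated error, whereas the sketch above only shows the basic weight mapping.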
- Anthology ID: 2023.acl-short.114
- Volume: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
- Month: July
- Year: 2023
- Address: Toronto, Canada
- Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 1329–1339
- URL: https://aclanthology.org/2023.acl-short.114
- DOI: 10.18653/v1/2023.acl-short.114
- Cite (ACL): James O’Neill and Sourav Dutta. 2023. Self-Distilled Quantization: Achieving High Compression Rates in Transformer-Based Language Models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1329–1339, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal): Self-Distilled Quantization: Achieving High Compression Rates in Transformer-Based Language Models (O’Neill & Dutta, ACL 2023)
- PDF: https://preview.aclanthology.org/nschneid-patch-2/2023.acl-short.114.pdf