The Cost of Compression: Investigating the Impact of Compression on Parametric Knowledge in Language Models
Satya Sai Srinath Namburi, Makesh Sreedhar, Srinath Srinivasan, Frederic Sala
Abstract
Compressing large language models (LLMs), often consisting of billions of parameters, provides faster inference, smaller memory footprints, and enables local deployment. The standard compression techniques are pruning and quantization, with the former eliminating redundant connections in model layers and the latter representing model parameters with as little as 4 bits. The key tradeoff is between the degree of compression and the impact on the quality of the compressed model. Existing research on LLM compression primarily focuses on performance in terms of general metrics like perplexity or downstream task accuracy. More fine-grained metrics, such as those measuring parametric knowledge, remain significantly underexplored. To help bridge this gap, we present a comprehensive analysis across multiple model families using the LAMA and LM-Harness benchmarks in order to systematically quantify the effect of commonly employed compression techniques on model performance. A particular focus is on tradeoffs involving parametric knowledge, with the goal of providing practitioners with practical insights to make informed decisions on compression.
- Anthology ID: 2023.findings-emnlp.349
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Houda Bouamor, Juan Pino, Kalika Bali
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 5255–5273
- URL: https://aclanthology.org/2023.findings-emnlp.349
- DOI: 10.18653/v1/2023.findings-emnlp.349
- Cite (ACL): Satya Sai Srinath Namburi, Makesh Sreedhar, Srinath Srinivasan, and Frederic Sala. 2023. The Cost of Compression: Investigating the Impact of Compression on Parametric Knowledge in Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 5255–5273, Singapore. Association for Computational Linguistics.
- Cite (Informal): The Cost of Compression: Investigating the Impact of Compression on Parametric Knowledge in Language Models (Namburi et al., Findings 2023)
- PDF: https://preview.aclanthology.org/nschneid-patch-5/2023.findings-emnlp.349.pdf
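As a companion to the abstract's description of the two standard compression techniques, here is a minimal sketch (not the authors' code) of magnitude pruning and symmetric round-to-nearest 4-bit weight quantization applied to a toy PyTorch linear layer. The function names, the 30% sparsity level, and the 4-bit setting are illustrative assumptions, not settings taken from the paper.

```python
import torch
import torch.nn as nn


def magnitude_prune(layer: nn.Linear, sparsity: float = 0.3) -> None:
    """Zero out the smallest-magnitude weights of the layer in place."""
    with torch.no_grad():
        w = layer.weight
        k = int(sparsity * w.numel())
        if k == 0:
            return
        # k-th smallest absolute weight becomes the pruning threshold.
        threshold = w.abs().flatten().kthvalue(k).values
        w.mul_((w.abs() > threshold).float())


def quantize_weights(layer: nn.Linear, bits: int = 4) -> None:
    """Symmetric round-to-nearest quantization, dequantized back in place."""
    with torch.no_grad():
        w = layer.weight
        qmax = 2 ** (bits - 1) - 1          # e.g. 7 for signed 4-bit
        scale = w.abs().max() / qmax
        w.copy_(torch.round(w / scale).clamp(-qmax, qmax) * scale)


# Toy usage: prune 30% of weights, then quantize the remainder to 4 bits.
layer = nn.Linear(16, 16)
magnitude_prune(layer, sparsity=0.3)
quantize_weights(layer, bits=4)
print("fraction of zeroed weights:", (layer.weight == 0).float().mean().item())
```

In practice, practitioners would rely on library implementations (e.g., PyTorch's pruning utilities or dedicated low-bit quantization backends) rather than this toy routine; the sketch only illustrates the operations whose knowledge-level effects the paper evaluates.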