SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation Models

Juan Pablo Munoz, Jinjie Yuan, Nilesh Jain


Abstract
Large pre-trained models (LPMs), such as large language models, have become ubiquitous and are employed in many applications. These models are often adapted to a desired domain or downstream task through a fine-tuning stage. This paper proposes SQFT, an end-to-end solution for low-precision sparse parameter-efficient fine-tuning of LPMs, allowing for effective model manipulation in resource-constrained environments. Additionally, an innovative strategy enables the merging of sparse weights with low-rank adapters without losing sparsity and accuracy, overcoming the limitations of previous approaches. SQFT also addresses the challenge of having quantized weights and adapters with different numerical precisions, enabling merging in the desired numerical format without sacrificing accuracy. Experiments across multiple adaptation scenarios, models, and comprehensive sparsity levels demonstrate the effectiveness of SQFT.
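
The sparsity-preserving merge mentioned in the abstract can be illustrated with a minimal PyTorch sketch. This is not SQFT's actual implementation (which additionally handles quantized base weights); the function name, shapes, and scaling factor below are illustrative assumptions. The idea is simply that the dense low-rank update B @ A is masked by the base weights' existing sparsity pattern before being folded in, so the merged matrix keeps the same zeros.

import torch

def merge_lora_preserving_sparsity(W, A, B, scaling=1.0):
    # Illustrative sketch only, not the paper's implementation.
    # W: frozen sparse base weight, shape (out_dim, in_dim)
    # A: low-rank factor, shape (r, in_dim); B: low-rank factor, shape (out_dim, r)
    mask = (W != 0).to(W.dtype)      # sparsity pattern of the base weights
    delta = scaling * (B @ A)        # dense low-rank adapter update
    return W + delta * mask          # merged weight keeps W's zeros

# Toy usage: a ~50%-sparse weight matrix with a rank-2 adapter
out_dim, in_dim, r = 8, 16, 2
W = torch.randn(out_dim, in_dim) * (torch.rand(out_dim, in_dim) > 0.5)
A = torch.randn(r, in_dim) * 0.01
B = torch.randn(out_dim, r) * 0.01
W_merged = merge_lora_preserving_sparsity(W, A, B)
assert torch.all(W_merged[W == 0] == 0)  # zeros of W remain zero after merging

A plain merge W + B @ A would densify the weights and forfeit the benefits of sparsity, which is the limitation of previous approaches that the abstract refers to.
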
Anthology ID: 2024.findings-emnlp.749
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 12817–12832
URL: https://preview.aclanthology.org/icon-24-ingestion/2024.findings-emnlp.749/
DOI: 10.18653/v1/2024.findings-emnlp.749
Cite (ACL):
Juan Pablo Munoz, Jinjie Yuan, and Nilesh Jain. 2024. SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation Models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 12817–12832, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation Models (Munoz et al., Findings 2024)
PDF: https://preview.aclanthology.org/icon-24-ingestion/2024.findings-emnlp.749.pdf