LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization

Xujia Wang, Yunjia Qi, Bin Xu


Abstract
Parameter-Efficient Fine-Tuning (PEFT) methods, such as LoRA, significantly reduce the number of trainable parameters by introducing low-rank decomposition matrices. However, existing methods perform extensive matrix multiplications in domain specialization tasks, resulting in computational inefficiency and sub-optimal fine-tuning performance. Hence, we propose LoSiA (Low-Resources Subnet Integration Adaptation), an innovative method that dynamically localizes and optimizes critical parameters during training. Specifically, it identifies a sub-network via gradient sparsity analysis and optimizes it as the trainable target. This design enables effective high-rank adaptation by updating only the sub-network parameters, reducing the additional matrix multiplications. We also present LoSiA-Pro, a faster implementation of LoSiA that reduces training latency by about 27% compared to LoRA. Extensive evaluations show that our method achieves minimal performance drop relative to full fine-tuning while requiring the least training time across domain specialization and common-sense reasoning tasks. Further analysis shows that LoSiA also reduces forgetting during continued training.
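The abstract sketches the core loop: score parameters by gradient signal, carve out a dense sub-matrix, and train only that block. The snippet below is a minimal illustrative sketch of that idea in PyTorch, not the authors' implementation; the absolute-gradient scoring, the `select_subnet` helper, and all sizes and hyperparameters are our assumptions, and LoSiA's actual importance criterion and re-localization schedule are described in the paper.

```python
# Hypothetical sketch: gradient-based subnet localization for one linear layer,
# followed by training only the localized sub-matrix (all other weights frozen).
import torch

def select_subnet(grad: torch.Tensor, rank: int):
    """Keep the top-`rank` rows and columns by accumulated gradient magnitude,
    defining a dense rank x rank trainable sub-matrix (assumed criterion)."""
    row_idx = torch.topk(grad.abs().sum(dim=1), rank).indices  # output neurons
    col_idx = torch.topk(grad.abs().sum(dim=0), rank).indices  # input neurons
    return row_idx, col_idx

# Toy usage on a frozen weight matrix with a dummy regression batch.
torch.manual_seed(0)
W = torch.randn(64, 64)                          # frozen base weight
x, y = torch.randn(8, 64), torch.randn(8, 64)

# 1) Probe pass: gather a gradient to localize the sub-network.
W_probe = W.clone().requires_grad_(True)
torch.nn.functional.mse_loss(x @ W_probe.T, y).backward()
row_idx, col_idx = select_subnet(W_probe.grad, rank=8)

# 2) Optimize only the selected block; it is updated in place within W, so no
#    extra low-rank factor multiplications (as in LoRA's BA product) are needed.
sub = W[row_idx][:, col_idx].clone().requires_grad_(True)
opt = torch.optim.SGD([sub], lr=1e-2)
for _ in range(10):
    W_eff = W.clone()
    W_eff[row_idx.unsqueeze(1), col_idx] = sub   # scatter trainable block back
    loss = torch.nn.functional.mse_loss(x @ W_eff.T, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch only the rank x rank block receives updates each step, which is what lets the method skip the adapter matrix products that low-rank decomposition methods pay on every forward pass.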
Anthology ID:
2025.emnlp-main.340
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rosé, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6707–6726
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.340/
Cite (ACL):
Xujia Wang, Yunjia Qi, and Bin Xu. 2025. LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 6707–6726, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization (Wang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.340.pdf
Checklist:
2025.emnlp-main.340.checklist.pdf