Parameter-Efficient Fine-Tuning of Large Language Models via Deconvolution in Subspace
Jia-Chen Zhang, Yu-Jie Xiong, Chun-Ming Xia, Dong-Hai Zhu, Xi-He Qiu
Abstract
This paper proposes a novel parameter-efficient fine-tuning method that combines the knowledge-completion capability of deconvolution with subspace learning, reducing the number of parameters required for fine-tuning by a factor of 8. Experimental results demonstrate that our method achieves superior training efficiency and performance compared to existing models.
- Anthology ID: 2025.coling-main.265
- Original: 2025.coling-main.265v1
- Version 2: 2025.coling-main.265v2
- Volume: Proceedings of the 31st International Conference on Computational Linguistics
- Month: January
- Year: 2025
- Address: Abu Dhabi, UAE
- Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
- Venue: COLING
- Publisher: Association for Computational Linguistics
- Pages: 3924–3935
- URL: https://preview.aclanthology.org/jlcl-multiple-ingestion/2025.coling-main.265/
- Cite (ACL): Jia-Chen Zhang, Yu-Jie Xiong, Chun-Ming Xia, Dong-Hai Zhu, and Xi-He Qiu. 2025. Parameter-Efficient Fine-Tuning of Large Language Models via Deconvolution in Subspace. In Proceedings of the 31st International Conference on Computational Linguistics, pages 3924–3935, Abu Dhabi, UAE. Association for Computational Linguistics.
- Cite (Informal): Parameter-Efficient Fine-Tuning of Large Language Models via Deconvolution in Subspace (Zhang et al., COLING 2025)
- PDF: https://preview.aclanthology.org/jlcl-multiple-ingestion/2025.coling-main.265.pdf
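To make the idea in the abstract concrete, the following is a minimal, hypothetical sketch of a deconvolution-in-subspace weight update, not the authors' released implementation: the module name `DeconvSubspaceUpdate`, the parameters `rank` and `stride`, and the initialization choices are all assumptions for illustration. The sketch stores two small low-rank factors in a shrunken subspace and uses a transposed convolution to expand their product back to the full weight shape, so the trainable factors are smaller than a plain low-rank (LoRA-style) update of the same rank.

```python
# Hypothetical sketch only: illustrates deconvolution expanding a subspace update,
# not the paper's exact architecture or its reported 8x parameter reduction.
import torch
import torch.nn as nn


class DeconvSubspaceUpdate(nn.Module):
    """Produces a weight update delta_W of shape (out_features, in_features)."""

    def __init__(self, in_features, out_features, rank=8, stride=2):
        super().__init__()
        assert in_features % stride == 0 and out_features % stride == 0
        # Low-rank factors in a shrunken subspace: (out/stride x rank) and (rank x in/stride).
        self.A = nn.Parameter(torch.zeros(out_features // stride, rank))
        self.B = nn.Parameter(torch.randn(rank, in_features // stride) * 0.01)
        # The deconvolution upsamples the small (out/stride x in/stride) matrix back to
        # (out x in), reconstructing ("completing") entries instead of storing them.
        self.deconv = nn.ConvTranspose2d(1, 1, kernel_size=stride, stride=stride, bias=False)

    def delta_weight(self):
        small = (self.A @ self.B).unsqueeze(0).unsqueeze(0)  # (1, 1, out/stride, in/stride)
        return self.deconv(small).squeeze(0).squeeze(0)      # (out, in)

    def forward(self, x, base_weight):
        # Adapter-style usage with a frozen base weight: y = x @ (W + delta_W)^T.
        return x @ (base_weight + self.delta_weight()).T
```

In this sketch the trainable factors shrink roughly in proportion to the deconvolution stride (e.g., with `stride=2` the factors are about half the size of the corresponding LoRA factors, plus a 2x2 deconvolution kernel); the 8x reduction reported in the paper depends on the authors' specific configuration.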