AROMA: Autonomous Rank-one Matrix Adaptation

Hao Nan Sheng, Zhi-Yong Wang, Hing Cheung So, Mingrui Yang


Abstract
As large language models continue to grow in size, parameter-efficient fine-tuning (PEFT) has become increasingly important. While low-rank adaptation (LoRA) offers a solution through low-rank updates, its static rank allocation may yield suboptimal results. Adaptive low-rank adaptation (AdaLoRA) improves on this with dynamic allocation but remains sensitive to the initial and target rank configurations. We introduce AROMA, a framework that automatically constructs layer-specific updates by iteratively building up rank-one components, each with very few trainable parameters that gradually diminish to zero. Unlike existing methods that employ rank-reduction mechanisms, AROMA introduces a dual-loop architecture for rank growth: the inner loop extracts the information in each rank-one subspace, while the outer loop determines the number of rank-one subspaces, i.e., the optimal rank. Optimizer states are reset between subspaces to keep them independent. AROMA significantly reduces the number of trainable parameters compared with LoRA and AdaLoRA while achieving superior performance on natural language understanding, natural language generation, and commonsense reasoning, offering new insights into adaptive PEFT.
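
To make the dual-loop idea concrete, here is a minimal PyTorch sketch assembled only from the abstract above; it is not the authors' implementation. The names (RankOneGrower, train_aroma_like), the zero/small-random initialization of each rank-one pair, the fixed inner_steps budget, and the max_rank cap are all illustrative assumptions, and the placeholder stopping comments stand in for the paper's actual convergence tests.

import torch
import torch.nn as nn

class RankOneGrower(nn.Module):
    """A frozen base linear layer plus a growing sum of rank-one updates u @ v."""
    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base.requires_grad_(False)  # pretrained weights stay frozen
        self.us = nn.ParameterList()
        self.vs = nn.ParameterList()

    def grow(self):
        # Outer-loop step (assumed form): freeze everything trained so far,
        # then append one fresh trainable rank-one component, so only a
        # single (u, v) pair has gradients at any moment.
        out_f, in_f = self.base.weight.shape
        for p in self.parameters():
            p.requires_grad_(False)
        self.us.append(nn.Parameter(torch.zeros(out_f, 1)))
        self.vs.append(nn.Parameter(0.01 * torch.randn(1, in_f)))

    def forward(self, x):
        if len(self.us) == 0:
            return self.base(x)
        delta = sum(u @ v for u, v in zip(self.us, self.vs))  # rank-k update
        return self.base(x) + x @ delta.T

def train_aroma_like(layer, loss_fn, batches, max_rank=8, inner_steps=100):
    # batches: a callable returning an iterator of (x, y) pairs per subspace.
    for _ in range(max_rank):  # outer loop: grow the rank one component at a time
        layer.grow()
        trainable = [p for p in layer.parameters() if p.requires_grad]
        # Fresh optimizer per subspace: discarding Adam's moment estimates
        # mirrors the abstract's "reset optimizer states" for independence.
        opt = torch.optim.AdamW(trainable, lr=1e-3)
        for _, (x, y) in zip(range(inner_steps), batches()):  # inner loop
            opt.zero_grad()
            loss_fn(layer(x), y).backward()
            opt.step()
        # A real implementation would test convergence here and stop growing
        # once a new rank-one component yields negligible improvement.

Because earlier components are frozen and only the newest (u, v) pair trains, the count of active trainable parameters stays tiny throughout, consistent with the abstract's claim of very few trainable parameters that gradually diminish to zero.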
Anthology ID: 2025.emnlp-main.170
Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 3443–3459
URL: https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.emnlp-main.170/
DOI: 10.18653/v1/2025.emnlp-main.170
Cite (ACL): Hao Nan Sheng, Zhi-Yong Wang, Hing Cheung So, and Mingrui Yang. 2025. AROMA: Autonomous Rank-one Matrix Adaptation. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 3443–3459, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): AROMA: Autonomous Rank-one Matrix Adaptation (Sheng et al., EMNLP 2025)
PDF: https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.emnlp-main.170.pdf
Checklist: 2025.emnlp-main.170.checklist.pdf