GeLoRA: Geometric Adaptive Ranks For Efficient LoRA Fine-tuning

Abdessalam Ed-dib, Zhanibek Datbayev, Amine M. Aboussalah


Abstract
Fine-tuning large language models (LLMs) is computationally expensive because it requires updating all model parameters. Low-Rank Adaptation (LoRA) reduces this cost by modifying a subset of weights, but selecting the appropriate rank introduces a trade-off: lower ranks improve efficiency at the expense of expressivity, while higher ranks enhance performance but increase computational burden. Existing adaptive LoRA methods lack a theoretical foundation to guide this trade-off optimally. We propose Geometric Low-Rank Adaptation (GeLoRA), a principled approach that estimates the intrinsic dimensionality of hidden data representations to adaptively select LoRA ranks. We show theoretically and empirically that the intrinsic dimension serves as a lower bound for the optimal rank of LoRA matrices, enabling a balance between efficiency and expressivity. Extensive experiments on GLUE, SQuAD (with DeBERTa), and MT-Bench (with LLaMA) demonstrate that GeLoRA consistently outperforms recent adaptive LoRA methods by up to +1.0%, while simultaneously reducing computational time by 13.5% to 64.2%, depending on the baseline, under the same parameter budget.
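The core idea in the abstract — estimate the intrinsic dimension of hidden representations and use it as a lower bound for the LoRA rank — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the use of the TwoNN estimator, the `r_max` cap, and all function names are assumptions for exposition.

```python
import numpy as np

def twonn_intrinsic_dim(X: np.ndarray) -> float:
    """Estimate intrinsic dimension with the TwoNN estimator,
    based on ratios of second- to first-nearest-neighbor distances."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances; fine for small n,
    # use a KD-tree / approximate NN search at scale.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)           # exclude self-distances
    part = np.partition(d2, 1, axis=1)     # two smallest entries per row
    r1 = np.sqrt(part[:, 0])               # first-NN distance
    r2 = np.sqrt(part[:, 1])               # second-NN distance
    mu = r2 / r1
    mu = mu[np.isfinite(mu) & (mu > 1.0)]  # drop degenerate pairs
    # Maximum-likelihood estimate under F(mu) = 1 - mu^(-d)
    return len(mu) / np.sum(np.log(mu))

def lora_rank_from_hidden(H: np.ndarray, r_max: int = 64) -> int:
    """Pick a LoRA rank for one layer: ceil of the estimated intrinsic
    dimension of that layer's hidden representations, capped at r_max
    (the cap is an assumed practical budget, not from the paper)."""
    return min(int(np.ceil(twonn_intrinsic_dim(H))), r_max)
```

For example, representations lying on a 2-dimensional manifold embedded in a 16-dimensional ambient space should yield an intrinsic-dimension estimate near 2, and hence a small per-layer rank, regardless of the ambient width.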
Anthology ID:
2025.findings-emnlp.1372
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
25174–25196
URL:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.findings-emnlp.1372/
DOI:
10.18653/v1/2025.findings-emnlp.1372
Cite (ACL):
Abdessalam Ed-dib, Zhanibek Datbayev, and Amine M. Aboussalah. 2025. GeLoRA: Geometric Adaptive Ranks For Efficient LoRA Fine-tuning. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 25174–25196, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
GeLoRA: Geometric Adaptive Ranks For Efficient LoRA Fine-tuning (Ed-dib et al., Findings 2025)
PDF:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.findings-emnlp.1372.pdf
Checklist:
 2025.findings-emnlp.1372.checklist.pdf