An Orthogonal High-Rank Adaptation for Large Language Models

Xin Zhang, Guang-Ze Chen, Shuzhen Li, Zhulin Liu, C.L.Philip Chen, Tong Zhang


Abstract
Low-rank adaptation (LoRA) efficiently adapts LLMs to downstream tasks by decomposing the weight update into trainable low-rank matrices for fine-tuning. However, the randomly initialized low-rank matrices may introduce substantial task-irrelevant information, and their recomposed form suffers from a limited representation space under low-rank operations. Such dense and choked adaptation impairs LLMs' performance on downstream tasks. To address these challenges, this paper proposes OHoRA, an orthogonal high-rank adaptation for parameter-efficient fine-tuning of LLMs. Guided by the information bottleneck theory, OHoRA decomposes LLMs' pre-trained weight matrices into orthogonal basis vectors via QR decomposition and splits them into two low-redundancy high-rank components to suppress task-irrelevant information. It then performs dynamic rank-elevated recomposition through the Kronecker product to generate expansive task-tailored representation spaces, enabling precise LLM adaptation and enhanced generalization. In this way, OHoRA operationalizes the information bottleneck theory: low-redundancy high-rank decomposition followed by rank-elevated recomposition yields more task-tailored representation spaces for precise LLM adaptation. Empirical evaluation shows OHoRA's effectiveness: it outperforms LoRA and its variants and achieves performance comparable to full fine-tuning with only 0.0371% trainable parameters.
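The pipeline the abstract describes (QR decomposition of a pre-trained weight matrix into an orthogonal basis, a split into two small components, and a Kronecker-product recomposition whose rank can exceed that of either factor) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation; the function name, the block-based split, the shapes, and the initialization below are assumptions made only to show how a Kronecker product elevates rank while keeping few trainable parameters.

```python
# Minimal sketch of QR decomposition + Kronecker-product recomposition,
# under illustrative assumptions (not the paper's exact OHoRA method).
import torch

def qr_kronecker_update(W: torch.Tensor, block: int = 8) -> torch.Tensor:
    """Return a weight update with the same shape as W (d_out x d_in)."""
    d_out, d_in = W.shape
    # 1) Orthogonal basis of the pre-trained weights via (reduced) QR decomposition.
    Q, _ = torch.linalg.qr(W)                      # Q: (d_out, min(d_out, d_in))
    # 2) Split into two small trainable factors A and B whose Kronecker product
    #    matches W's shape: d_out = a1*b1, d_in = a2*b2 (hypothetical split).
    a1, b1 = d_out // block, block
    a2, b2 = d_in // block, block
    A = torch.nn.Parameter(Q[:a1, :a2].clone())    # initialized from the orthogonal basis
    B = torch.nn.Parameter(torch.zeros(b1, b2))    # zero-init so the update starts at zero
    # 3) Rank-elevated recomposition: rank(kron(A, B)) = rank(A) * rank(B),
    #    so the update's rank can exceed the rank of either factor alone.
    return torch.kron(A, B)                        # shape (a1*b1, a2*b2) == (d_out, d_in)

# Example: a 64x64 projection. The update has the full 64x64 shape, while the
# trainable factors A and B hold only 8*8 + 8*8 = 128 parameters.
W = torch.randn(64, 64)
print(qr_kronecker_update(W).shape)                # torch.Size([64, 64])
```

The key property shown is that rank(A ⊗ B) = rank(A) · rank(B), so a Kronecker-product recomposition can reach higher ranks than the product of two rank-r LoRA matrices at a comparable trainable-parameter count.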
Anthology ID:
2025.emnlp-main.951
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
18826–18844
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.951/
Cite (ACL):
Xin Zhang, Guang-Ze Chen, Shuzhen Li, Zhulin Liu, C.L.Philip Chen, and Tong Zhang. 2025. An Orthogonal High-Rank Adaptation for Large Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 18826–18844, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
An Orthogonal High-Rank Adaptation for Large Language Models (Zhang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.951.pdf
Checklist:
 2025.emnlp-main.951.checklist.pdf