@inproceedings{yifei-etal-2025-dislora,
    title = "{D}is{L}o{RA}: Task-specific Low-Rank Adaptation via Orthogonal Basis from Singular Value Decomposition",
    author = "Yifei, She  and
      Wei, Xinhao  and
      Wang, Yulong",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.694/",
    pages = "13751--13766",
    ISBN = "979-8-89176-332-6",
    abstract = "Parameter-efficient fine-tuning (PEFT) of large language models (LLMs) is critical for adapting to diverse downstream tasks with minimal computational cost. We propose Directional-SVD Low-Rank Adaptation (DisLoRA), a novel PEFT framework that leverages singular value decomposition (SVD) to decompose pretrained weight matrices into orthogonal backbone and task-specific subspaces, enabling precise capture of task-specific directions (TSDs). By dynamically identifying TSDs and employing adaptive soft orthogonal regularization with a mean-normalization mechanism, DisLoRA balances task-specific and orthogonal losses without manual tuning, ensuring robust training stability. Extensive experiments on GLUE and Commonsense Reasoning benchmarks demonstrate that DisLoRA surpasses established PEFT methods, including LoRA, PiSSA, DoRA, LoRA-Dash, and SORSA. DisLoRA achieves superior performance on multiple individual GLUE datasets, surpassing baselines by up to 10.28{\%} on SST-2 and 3.28{\%} on CoLA, and consistently attains higher average accuracy than baselines across Commonsense Reasoning tasks, with a maximum gain of 3.1{\%}. These results demonstrate DisLoRA{'}s effectiveness for efficient, high-performing LLM adaptation to domain-specific tasks while preserving generalization."
}
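A minimal sketch of the general idea the abstract describes, not the authors' implementation: split a pretrained weight via SVD into a frozen residual backbone and a small trainable low-rank component spanned by the top singular directions. The class name `SVDLinear`, the rank parameter `r`, and the trainable/frozen split below are illustrative assumptions; DisLoRA's dynamic TSD identification and adaptive soft orthogonal regularization with mean normalization are not reproduced here.

```python
# Illustrative sketch only (assumed layout, not the paper's released code):
# SVD splits a pretrained weight W into a frozen backbone (trailing singular
# directions) and a trainable rank-r part (leading singular directions).
import torch
import torch.nn as nn


class SVDLinear(nn.Module):
    """Linear layer with an SVD-based frozen backbone plus a trainable
    rank-r component (hypothetical decomposition for illustration)."""

    def __init__(self, pretrained_weight: torch.Tensor, r: int = 8):
        super().__init__()
        # Thin SVD of the pretrained weight W (out_features x in_features).
        U, S, Vh = torch.linalg.svd(pretrained_weight, full_matrices=False)

        # Top-r singular directions: candidate task-specific subspace (trainable).
        self.A = nn.Parameter(U[:, :r] * S[:r])   # (out, r)
        self.B = nn.Parameter(Vh[:r, :])          # (r, in)

        # Remaining directions form the frozen backbone (registered as a buffer).
        backbone = U[:, r:] @ torch.diag(S[r:]) @ Vh[r:, :]
        self.register_buffer("backbone", backbone)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen backbone plus trainable low-rank update.
        return x @ (self.backbone + self.A @ self.B).T


# Usage sketch: only A and B receive gradients during fine-tuning.
W = torch.randn(64, 32)  # stand-in for a pretrained weight matrix
layer = SVDLinear(W, r=4)
trainable = [p for p in layer.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")
```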