ORAL: Prompting Your Large-Scale LoRAs via Conditional Recurrent Diffusion

Rana Shahroz, Dongwen Tang, Pingzhi Li, Kai Wang, Tianlong Chen


Abstract
Parameter generation has emerged as a novel paradigm for neural network development, offering an alternative to traditional neural network training by synthesizing high-quality model weights directly. In the context of Low-Rank Adaptation (LoRA) for evolving (i.e., constantly updated) large language models (LLMs), this approach promises efficient adaptation without costly retraining. However, existing methods face critical limitations in simultaneously achieving scalability and controllability. In this paper, we introduce ORAL, a novel conditional recurrent diffusion framework that addresses these challenges. ORAL incorporates a novel conditioning mechanism that integrates model architecture and textual task specifications, enabling the generation of task-specific LoRA parameters that can seamlessly transfer across evolving foundation models. Our approach scales to LLMs with billions of parameters while maintaining controllability. Through extensive experiments across seven language tasks, four vision tasks, and three multimodal tasks using five pre-trained LLMs, we demonstrate that ORAL generates high-quality LoRA parameters that achieve comparable or superior performance to their vanilla-trained counterparts.
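For context on what "LoRA parameters" refers to in the abstract, the following minimal PyTorch sketch shows the standard low-rank adapter form W' = W + (α/r)·BA that a generated adapter would plug into a frozen base layer. The layer sizes, rank, and scaling here are illustrative assumptions, not the paper's configuration, and the sketch covers only the adapter parameterization, not ORAL's conditional recurrent diffusion generator.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Standard LoRA adapter: y = base(x) + (alpha / r) * x A^T B^T.
    Only A and B are the 'LoRA parameters' a generator would need to
    produce; the pre-trained base weight stays frozen.
    Sizes and rank below are illustrative, not the paper's settings."""

    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)                 # frozen pre-trained weight
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # low-rank update applied on top of the frozen base projection
        return self.base(x) + self.scaling * (x @ self.lora_A.T) @ self.lora_B.T

# A generated adapter is just the (A, B) pair per adapted layer.
layer = LoRALinear(1024, 1024)
x = torch.randn(2, 1024)
print(layer(x).shape)  # torch.Size([2, 1024])
```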
Anthology ID:
2025.findings-emnlp.71
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1357–1370
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.71/
DOI:
10.18653/v1/2025.findings-emnlp.71
Cite (ACL):
Rana Shahroz, Dongwen Tang, Pingzhi Li, Kai Wang, and Tianlong Chen. 2025. ORAL: Prompting Your Large-Scale LoRAs via Conditional Recurrent Diffusion. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 1357–1370, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
ORAL: Prompting Your Large-Scale LoRAs via Conditional Recurrent Diffusion (Shahroz et al., Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.71.pdf
Checklist:
2025.findings-emnlp.71.checklist.pdf