Yuxuan Yao
2025
LoRE-Merging: Exploring Low-Rank Estimation For Large Language Model Merging
Zehua Liu | Han Wu | Yuxuan Yao | Xiaojin Fu | Ruifeng She | Xiongwei Han | Tao Zhong | Mingxuan Yuan
Findings of the Association for Computational Linguistics: EMNLP 2025
While most current approaches rely on further training techniques, such as fine-tuning or reinforcement learning, to enhance model capacities, model merging stands out for its ability to improve models without requiring any additional training. In this paper, we propose a unified framework for model merging based on low-rank estimation of task vectors, without requiring access to the base model, named LoRE-Merging. Our approach is motivated by the observation that task vectors from fine-tuned models frequently exhibit a limited number of dominant singular values, making low-rank estimations less prone to interference. We implement the method by formulating merging as an optimization problem. Extensive empirical experiments demonstrate the effectiveness of our framework in mitigating interference and preserving task-specific information, thereby advancing the state-of-the-art performance in model merging techniques.
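The core intuition above — that a task vector (the difference between fine-tuned and base weights) is dominated by a few singular values, so a truncated-SVD estimate discards interference while keeping task-specific signal — can be sketched as follows. This is a minimal illustration only: for simplicity it assumes access to the base weights, which the paper's optimization formulation explicitly avoids, and all function names are hypothetical, not the paper's implementation.

```python
import numpy as np

def low_rank_task_vector(w_finetuned, w_base, rank):
    """Estimate the task vector (fine-tuned minus base weights) by its
    top-`rank` singular components via truncated SVD. Illustrative only;
    LoRE-Merging itself solves an optimization problem without the base."""
    tau = w_finetuned - w_base                      # task vector
    u, s, vt = np.linalg.svd(tau, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]     # rank-`rank` estimate

def merge_models(w_base, finetuned_weights, rank=4):
    """Hypothetical merge: add each model's low-rank task-vector
    estimate back onto the shared base weights."""
    merged = w_base.copy()
    for w in finetuned_weights:
        merged += low_rank_task_vector(w, w_base, rank)
    return merged
```

When the true task vector is itself low-rank, the truncated estimate recovers it almost exactly, which is the regime the observation about dominant singular values points to.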
2023
Fine-grained Conversational Decoding via Isotropic and Proximal Search
Yuxuan Yao | Han Wu | Qiling Xu | Linqi Song
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
General-purpose text decoding approaches are usually adopted for dialogue response generation. Although the quality of generated responses can be improved with dialogue-specific encoding methods, conversational decoding methods remain under-explored. Inspired by SimDRC, which argues that a good dialogue feature space should follow the rules of locality and isotropy, we present a fine-grained conversational decoding method, termed isotropic and proximal search (IPS). Our method is designed to generate semantically concentrated responses while maintaining informativeness and discrimination against the context. Experiments show that our approach significantly outperforms existing decoding strategies in the dialogue field across both automatic and human evaluation metrics. More in-depth analyses further confirm the effectiveness of our approach.