RLAE: Reinforcement Learning-Assisted Ensemble for LLMs
Yuqian Fu | Yuanheng Zhu | Jiajun Chai | Guojun Yin | Wei Lin | Qichao Zhang | Dongbin Zhao
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Ensembling large language models (LLMs) can effectively combine the diverse strengths of different models, offering a promising approach to enhance performance across various tasks. However, existing methods typically rely on fixed weighting strategies that fail to adapt to the dynamic, context-dependent characteristics of LLM capabilities. In this work, we propose **R**einforcement **L**earning-**A**ssisted **E**nsemble for LLMs (RLAE), a novel framework that reformulates LLM ensemble through the lens of a Markov Decision Process (MDP). Our approach introduces an RL agent that dynamically adjusts ensemble weights by considering both the input context and intermediate generation states, with the agent trained using rewards that directly correspond to the quality of final outputs. We implement RLAE using both single-agent and multi-agent reinforcement learning algorithms (RLAE_PPO and RLAE_MAPPO), demonstrating substantial improvements over conventional ensemble methods. Extensive evaluations on a diverse set of tasks show that RLAE outperforms existing approaches by up to 3.3 percentage points in accuracy, offering a more effective framework for LLM ensembling. Furthermore, our method exhibits superior generalization across different tasks without the need for retraining, while simultaneously achieving lower time latency. The source code is publicly available.
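The abstract describes the mechanism only at a high level, so the following is a minimal, hypothetical sketch of what state-dependent ensemble weighting could look like, assuming token-level mixing of next-token distributions. The class `EnsembleWeightPolicy`, the function `ensemble_next_token_dist`, and the state featurization are illustrative names not taken from the paper, and the actual RLAE training loop (PPO/MAPPO optimization of the weight policy against output-quality rewards) is omitted.

```python
import torch
import torch.nn as nn


class EnsembleWeightPolicy(nn.Module):
    """Maps a summary of the current generation state to ensemble
    weights over K candidate LLMs (illustrative, not the paper's code)."""

    def __init__(self, state_dim: int, num_models: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128),
            nn.Tanh(),
            nn.Linear(128, num_models),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Softmax places the weights on a simplex over ensemble members.
        return torch.softmax(self.net(state), dim=-1)


def ensemble_next_token_dist(model_probs: torch.Tensor,
                             weights: torch.Tensor) -> torch.Tensor:
    """Mix per-model next-token distributions with state-dependent weights.

    model_probs: (K, vocab_size) next-token distributions, one per LLM.
    weights:     (K,) ensemble weights produced by the RL policy.
    """
    return (weights.unsqueeze(-1) * model_probs).sum(dim=0)


# Toy usage: 3 models, a 10-dim generation-state feature, vocabulary of 50.
policy = EnsembleWeightPolicy(state_dim=10, num_models=3)
state = torch.randn(10)                          # e.g. pooled hidden features
probs = torch.rand(3, 50)
probs = probs / probs.sum(dim=-1, keepdim=True)  # normalize each model's dist
mixed = ensemble_next_token_dist(probs, policy(state))
print(mixed.sum())  # ~1.0: the mixture is itself a valid distribution
```

In an RL training loop, the policy's weights would be treated as the action at each generation step and updated with PPO (or MAPPO in the multi-agent variant) using a reward reflecting final output quality; that loop is not sketched here.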