Zhenhe Zhang
2024
PANDA: Preference Adaptation for Enhancing Domain-Specific Abilities of LLMs
An Liu | Zonghan Yang | Zhenhe Zhang | Qingyuan Hu | Peng Li | Ming Yan | Ji Zhang | Fei Huang | Yang Liu
Findings of the Association for Computational Linguistics: ACL 2024
While large language models (LLMs) have demonstrated considerable capabilities across various natural language tasks, they often fall short of the performance achieved by domain-specific state-of-the-art models. One potential approach to enhancing the domain-specific capabilities of LLMs is to fine-tune them on corresponding datasets. However, this method can be both resource- and time-intensive, and it is not applicable to closed-source commercial LLMs. In this paper, we propose Preference Adaptation for Enhancing Domain-specific Abilities of LLMs (PANDA), a method designed to augment the domain-specific capabilities of LLMs by leveraging insights from the response preferences of expert models, without requiring fine-tuning. Our experimental results reveal that PANDA significantly enhances the domain-specific abilities of LLMs on text classification and interactive decision-making tasks. Moreover, the LLM with PANDA even outperforms the expert model it learned from on 4 tasks of ScienceWorld. This finding highlights the potential of exploring tuning-free approaches to achieve weak-to-strong generalization.