Zaifu Zhan


2025

Towards Better Multi-task Learning: A Framework for Optimizing Dataset Combinations in Large Language Models
Zaifu Zhan | Rui Zhang
Findings of the Association for Computational Linguistics: NAACL 2025

To efficiently select optimal dataset combinations for enhancing multi-task learning (MTL) performance in large language models, we propose a novel framework that leverages a neural network to predict the best dataset combinations. The framework iteratively refines the selection, greatly improving efficiency while remaining model-, dataset-, and domain-independent. Through experiments on 12 biomedical datasets across four tasks (named entity recognition, relation extraction, event extraction, and text classification), we demonstrate that our approach effectively identifies better combinations, even for tasks that may seem unpromising from a human perspective. This verifies that our framework provides a promising solution for maximizing MTL potential.
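
The abstract describes an iterative, predictor-guided search over dataset combinations. As a rough illustration of that general idea only, here is a minimal Python sketch; the linear surrogate, binary featurization, candidate pool, and round/batch parameters below are all illustrative assumptions, not the paper's actual predictor or training procedure.

```python
import itertools
import random

# Minimal sketch of predictor-guided combination search (illustrative only;
# the paper's neural predictor and selection loop may differ substantially).

DATASETS = [f"ds{i}" for i in range(12)]  # stand-ins for 12 biomedical datasets

def evaluate(combo):
    """Placeholder for fine-tuning an LLM on `combo` and scoring MTL
    performance; in practice this is the expensive step the predictor avoids."""
    random.seed(hash(combo) % (2 ** 32))
    return random.random()

def featurize(combo):
    """Binary membership vector over the dataset pool."""
    return [1.0 if d in combo else 0.0 for d in DATASETS]

class Surrogate:
    """Toy linear model standing in for the neural performance predictor."""

    def __init__(self, dim, lr=0.05):
        self.w = [0.0] * dim
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def fit(self, xs, ys, epochs=200):
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                err = self.predict(x) - y
                self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]

def search(rounds=3, seed_size=8, batch=4):
    # Seed with a few randomly evaluated combinations, then alternate between
    # fitting the surrogate and evaluating its top-ranked unseen candidates.
    history = {}
    while len(history) < seed_size:
        combo = tuple(sorted(random.sample(DATASETS, random.randint(2, 6))))
        history[combo] = evaluate(combo)
    for _ in range(rounds):
        model = Surrogate(len(DATASETS))
        model.fit([featurize(c) for c in history], list(history.values()))
        candidates = [tuple(sorted(c))
                      for r in range(2, 7)
                      for c in itertools.combinations(DATASETS, r)]
        unseen = [c for c in candidates if c not in history]
        unseen.sort(key=lambda c: model.predict(featurize(c)), reverse=True)
        for c in unseen[:batch]:  # only the predicted-best few get evaluated
            history[c] = evaluate(c)
    return max(history, key=history.get)

if __name__ == "__main__":
    print("Best combination found:", search())
```

In the paper's setting, evaluate() would correspond to actually fine-tuning and scoring the model on a combination, so the efficiency gain claimed in the abstract comes from how few of those expensive calls the predictor-guided loop needs.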