Towards Better Multi-task Learning: A Framework for Optimizing Dataset Combinations in Large Language Models

Zaifu Zhan, Rui Zhang


Abstract
To efficiently select optimal dataset combinations for enhancing multi-task learning (MTL) performance in large language models, we propose a novel framework that leverages a neural network to predict the best dataset combinations. The framework iteratively refines the selection, greatly improving efficiency, while remaining model-, dataset-, and domain-independent. Through experiments on 12 biomedical datasets across four tasks (named entity recognition, relation extraction, event extraction, and text classification), we demonstrate that our approach effectively identifies better combinations, even for tasks that may seem unpromising from a human perspective. This verifies that our framework provides a promising solution for maximizing MTL potential.
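The abstract describes the framework only at a high level: a neural network predicts the score of candidate dataset combinations, and the selection is refined iteratively. Below is a minimal sketch of such a predict-then-refine loop, assuming a scikit-learn MLP as the combination-score predictor and a synthetic stand-in for the expensive fine-tune-and-evaluate step; all names, the random candidate pool, and the synergy model are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an iterative dataset-combination search:
# a cheap neural regressor maps a binary dataset-inclusion mask to a
# predicted MTL score, and only the most promising candidate per round
# is sent to the expensive evaluation. Not the authors' code.
import numpy as np
from sklearn.neural_network import MLPRegressor

N_DATASETS = 12  # e.g., the 12 biomedical datasets in the paper
rng = np.random.default_rng(0)

# Synthetic "ground truth" pairwise synergies between datasets; in the
# real framework this step would fine-tune an LLM and measure performance.
_synergy = rng.standard_normal((N_DATASETS, N_DATASETS))

def evaluate(mask):
    """Expensive oracle: score the combination selected by a 0/1 mask."""
    idx = np.flatnonzero(mask)
    return float(_synergy[np.ix_(idx, idx)].sum())

def random_mask():
    """Draw a random non-empty dataset combination."""
    m = rng.integers(0, 2, N_DATASETS)
    m[rng.integers(N_DATASETS)] = 1
    return m

# Seed the predictor with a few randomly evaluated combinations.
X = np.array([random_mask() for _ in range(16)])
y = np.array([evaluate(m) for m in X])

predictor = MLPRegressor(hidden_layer_sizes=(32, 32),
                         max_iter=2000, random_state=0)

for step in range(10):
    predictor.fit(X, y)
    # Score a large pool of candidates with the cheap predictor...
    pool = np.array([random_mask() for _ in range(512)])
    best = pool[int(np.argmax(predictor.predict(pool)))]
    # ...and spend the expensive evaluation only on the predictor's pick,
    # then fold the result back in to refine the next round.
    X = np.vstack([X, best])
    y = np.append(y, evaluate(best))
    print(f"step {step}: evaluated score of predicted best = {y[-1]:.3f}")

print("best combination found:", X[int(np.argmax(y))])
```

In this sketch the predictor replaces exhaustive search over all 2^12 - 1 combinations with a handful of real evaluations per round, which is the efficiency argument the abstract makes.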
Anthology ID: 2025.findings-naacl.297
Volume: Findings of the Association for Computational Linguistics: NAACL 2025
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 5373–5386
URL: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.297/
Cite (ACL): Zaifu Zhan and Rui Zhang. 2025. Towards Better Multi-task Learning: A Framework for Optimizing Dataset Combinations in Large Language Models. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 5373–5386, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): Towards Better Multi-task Learning: A Framework for Optimizing Dataset Combinations in Large Language Models (Zhan & Zhang, Findings 2025)
PDF: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.297.pdf