MoDEM: Mixture of Domain Expert Models

Toby Simonds, Kemal Kurniawan, Jey Han Lau

Abstract
We propose a novel approach to enhancing the performance and efficiency of large language models (LLMs) by combining domain prompt routing with domain-specialized models. We introduce a system that utilizes a BERT-based router to direct incoming prompts to the most appropriate domain expert model. These expert models are specifically tuned for domains such as health, mathematics, and science. Our research demonstrates that this approach can significantly outperform general-purpose models of comparable size, leading to a superior performance-to-cost ratio across various benchmarks. The implications of this study suggest a potential shift in LLM development and deployment. Rather than focusing solely on creating increasingly large, general-purpose models, the future of AI may lie in developing ecosystems of smaller, highly specialized models coupled with sophisticated routing systems. This approach could lead to more efficient resource utilization, reduced computational costs, and superior overall performance.
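To make the routing architecture concrete, below is a minimal Python sketch of the dispatch logic the abstract describes: a BERT-based classifier labels each incoming prompt with a domain, and the prompt is then handed to the matching expert model. All checkpoint names (the `example/...` identifiers), the `DOMAIN_EXPERTS` mapping, and the general-purpose fallback are hypothetical placeholders for illustration, not the models released with the paper.

```python
from transformers import pipeline

# Hypothetical expert checkpoints -- placeholders, not the paper's artifacts.
DOMAIN_EXPERTS = {
    "health": "example/health-expert-llm",
    "math": "example/math-expert-llm",
    "science": "example/science-expert-llm",
}
FALLBACK = "example/general-llm"  # used when no expert covers the predicted domain

# A BERT-based classifier fine-tuned to predict a prompt's domain label
# (also a hypothetical checkpoint name).
router = pipeline("text-classification", model="example/bert-domain-router")

def select_expert(prompt: str) -> str:
    """Return the name of the expert model that should handle the prompt."""
    domain = router(prompt)[0]["label"]  # e.g. "math"
    return DOMAIN_EXPERTS.get(domain, FALLBACK)

# Example: a calculus question should be dispatched to the math expert.
print(select_expert("What is the integral of x * exp(x)?"))
```

The key design point is that the router is a small, cheap classifier, so the cost of routing is negligible relative to generation; only one domain-tuned expert is invoked per prompt.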
Anthology ID: 2024.alta-1.6
Volume: Proceedings of the 22nd Annual Workshop of the Australasian Language Technology Association
Month: December
Year: 2024
Address: Canberra, Australia
Editors: Tim Baldwin, Sergio José Rodríguez Méndez, Nicholas Kuo
Venue: ALTA
Publisher: Association for Computational Linguistics
Pages: 75–88
URL: https://preview.aclanthology.org/fix-sig-urls/2024.alta-1.6/
Cite (ACL): Toby Simonds, Kemal Kurniawan, and Jey Han Lau. 2024. MoDEM: Mixture of Domain Expert Models. In Proceedings of the 22nd Annual Workshop of the Australasian Language Technology Association, pages 75–88, Canberra, Australia. Association for Computational Linguistics.
Cite (Informal): MoDEM: Mixture of Domain Expert Models (Simonds et al., ALTA 2024)
PDF: https://preview.aclanthology.org/fix-sig-urls/2024.alta-1.6.pdf