Merging Continual Pretraining Models for Domain-Specialized LLMs: A Case Study in Finance

Kentaro Ueda, François Portet, Hirohiko Suwa, Keiichi Yasumoto


Abstract
While LLMs excel at general tasks, they struggle in specialized domains like finance, which demand a combination of domain knowledge, mathematical reasoning, and multilingual processing. Merging domain-specific Continual Pretraining (CPT) "experts" offers a practical alternative to costly and unstable multi-skill training. However, unlike the well-studied merging of Supervised Fine-Tuned (SFT) models, merging CPT models remains largely unexplored. We address this gap by creating financial LLMs from experts in finance, math, and Japanese. We propose a three-stage evaluation focusing on knowledge recovery, complementarity, and emergence, and assess three merging methods (Task Arithmetic, TIES, and DARE-TIES) on a comprehensive financial benchmark curated from 18 tasks across 8 established datasets. Results show that merging an expert with its base model recovers general knowledge lost during CPT, while merging experts improves performance and can yield emergent cross-domain skills. Among the methods, Task Arithmetic performs strongly but is hyperparameter-sensitive, whereas TIES is more robust. Our findings also suggest that while model similarity correlates with merging success, emergent skills depend on more complex factors. This work presents the first foundational analysis of CPT model merging, establishing a principled framework and providing clear guidance for building multi-skill LLMs from existing assets.
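For readers unfamiliar with the merging methods the abstract compares, the following is a minimal sketch of Task Arithmetic: each CPT expert contributes a task vector (its weights minus the shared base weights), and a scaled sum of these vectors is added back to the base model. The checkpoint paths, the scaling coefficient, and the helper name are illustrative assumptions, not details taken from the paper; TIES and DARE-TIES build on this scheme by pruning and sign-resolving the task vectors before summation.

```python
# Sketch of Task Arithmetic merging over model state dicts.
# All file names and the coefficient lam are hypothetical placeholders.
import torch

def task_arithmetic_merge(base_sd, expert_sds, lam=0.5):
    """Merge expert state dicts into the base via scaled task vectors."""
    merged = {}
    for name, base_w in base_sd.items():
        # Task vector for each expert: its delta from the shared base model.
        deltas = [sd[name] - base_w for sd in expert_sds]
        # Merged weight = base + lam * sum of expert task vectors.
        merged[name] = base_w + lam * sum(deltas)
    return merged

# Hypothetical usage: finance, math, and Japanese CPT experts over one base.
base = torch.load("base_model.pt")
experts = [torch.load(p) for p in
           ("finance_cpt.pt", "math_cpt.pt", "japanese_cpt.pt")]
merged = task_arithmetic_merge(base, experts, lam=0.4)
torch.save(merged, "merged_financial_llm.pt")
```

The coefficient lam is the hyperparameter the abstract flags as sensitive for Task Arithmetic; in practice it is tuned on a validation set.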
Anthology ID:
2026.lrec-main.794
Volume:
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Month:
May
Year:
2026
Address:
Palma de Mallorca, Spain
Editors:
Stelios Piperidis, Núria Bel, Henk van den Heuvel, Nancy Ide, Simon Krek, Antonio Toral
Venue:
LREC
Publisher:
ELRA Language Resources Association
Pages:
10114–10129
URL:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.794/
Cite (ACL):
Kentaro Ueda, François Portet, Hirohiko Suwa, and Keiichi Yasumoto. 2026. Merging Continual Pretraining Models for Domain-Specialized LLMs: A Case Study in Finance. In Proceedings of the Fifteenth Language Resources and Evaluation Conference, pages 10114–10129, Palma de Mallorca, Spain. ELRA Language Resources Association.
Cite (Informal):
Merging Continual Pretraining Models for Domain-Specialized LLMs: A Case Study in Finance (Ueda et al., LREC 2026)
PDF:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.794.pdf