Jongyoul Park
2026
ELO: Efficient Layer-Specific Optimization for Continual Pretraining of Multilingual LLMs
Hangyeol Yoo | ChangSu Choi | Minjun Kim | Seohyun Song | SeungWoo Song | Inho Won | Jongyoul Park | Cheoneum Park | KyungTae Lim
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 5: Industry Track)
We propose an efficient layer-specific optimization (ELO) method designed to enhance continual pretraining (CP) for specific languages in multilingual large language models (MLLMs). This approach addresses the common challenges of high computational cost and degradation of source language performance associated with traditional CP. The ELO method consists of two main stages: (1) ELO Pretraining, where a small subset of specific layers, identified in our experiments as the critically important first and last layers, is detached from the original MLLM and trained on the target language. This significantly reduces not only the number of trainable parameters but also the total parameters computed during the forward pass, minimizing GPU memory consumption and accelerating training. (2) Layer Alignment, where the newly trained layers are reintegrated into the original model, followed by a brief full fine-tuning step on a small dataset to align the parameters. Experimental results demonstrate that the ELO method achieves a training speedup of up to 6.46 times compared to existing methods, while improving target language performance by up to 6.2% on qualitative benchmarks and effectively preserving source language (English) capabilities.
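The layer-selection idea in stage (1) can be illustrated with a minimal, framework-agnostic sketch. This assumes a model exposed as an ordered list of layer names; `split_elo_layers` and the `block_*` names are hypothetical illustrations, not an API from the paper.

```python
def split_elo_layers(layer_names, n_edge=1):
    """Partition an ordered list of layer names into the edge layers
    that ELO-style pretraining would train (the first and last n_edge
    layers) and the frozen middle layers."""
    if len(layer_names) < 2 * n_edge:
        raise ValueError("model too shallow for the requested edge count")
    trainable = layer_names[:n_edge] + layer_names[-n_edge:]
    frozen = layer_names[n_edge:-n_edge]
    return trainable, frozen

# Toy 6-layer model: only the first and last blocks are selected.
layers = [f"block_{i}" for i in range(6)]
trainable, frozen = split_elo_layers(layers)
# trainable == ["block_0", "block_5"]
```

In an actual training setup, the frozen set would have gradient computation disabled (e.g. `requires_grad = False` in PyTorch), which is what yields the memory and speed savings the abstract describes.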
TELLME: Test-Enhanced Learning for Language Model Enrichment
Minjun Kim | Inho Won | HyeonSeok Lim | MinKyu Kim | Junghun Yuk | Wooyoung Go | Jongyoul Park | Jungyeul Park | KyungTae Lim
Findings of the Association for Computational Linguistics: EACL 2026
Continual pre-training (CPT) has been widely adopted as a method for domain expansion in large language models. However, CPT has consistently been accompanied by challenges, such as the difficulty of acquiring large-scale domain-specific datasets and high computational costs. In this study, we propose a novel method called Test-Enhanced Learning for Language Model Enrichment (TELLME) to alleviate these issues. TELLME leverages the Test-Enhanced Learning (TEL) principle, whereby the model’s learning efficiency is improved using quizzes during training. It integrates this principle with CPT, thereby promoting efficient domain-specific knowledge acquisition and long-term memory retention. Experimental results demonstrate that TELLME outperforms existing methods by up to 23.6% in the financial domain and achieves a 9.8% improvement in long-term memory retention.
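The quiz-during-training idea can be sketched as interleaving quiz items into the pretraining stream at a fixed interval. This is a minimal illustration of the principle, not the paper's implementation; `interleave_quizzes` and the `every` interval are hypothetical names.

```python
def interleave_quizzes(corpus, quizzes, every=4):
    """Insert one quiz example after every `every` corpus documents,
    cycling through the quiz pool, to mimic test-enhanced learning."""
    mixed, qi = [], 0
    for i, doc in enumerate(corpus, start=1):
        mixed.append(doc)
        if quizzes and i % every == 0:
            mixed.append(quizzes[qi % len(quizzes)])
            qi += 1
    return mixed

# Toy stream: a quiz appears after every 2 documents.
stream = interleave_quizzes(["d1", "d2", "d3", "d4", "d5"],
                            ["q1", "q2"], every=2)
# stream == ["d1", "d2", "q1", "d3", "d4", "q2", "d5"]
```

The resulting mixed stream would then be fed to an ordinary CPT loop, so the quizzes act as periodic retrieval practice rather than a separate evaluation pass.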