TELLME: Test-Enhanced Learning for Language Model Enrichment
Minjun Kim, Inho Won, HyeonSeok Lim, MinKyu Kim, Junghun Yuk, Wooyoung Go, Jongyoul Park, Jungyeul Park, KyungTae Lim
Abstract
Continual pre-training (CPT) has been widely adopted as a method for domain expansion in large language models. However, CPT faces persistent challenges, such as the difficulty of acquiring large-scale domain-specific datasets and its high computational cost. In this study, we propose a novel method, Test-Enhanced Learning for Language Model Enrichment (TELLME), to alleviate these issues. TELLME leverages the Test-Enhanced Learning (TEL) principle, whereby quizzes administered during training improve the model's learning efficiency. It integrates this principle with CPT, thereby promoting efficient domain-specific knowledge acquisition and long-term memory retention. Experimental results demonstrate that TELLME outperforms existing methods by up to 23.6% in the financial domain and achieves a 9.8% improvement in long-term memory retention.
- Anthology ID: 2026.findings-eacl.84
- Volume: Findings of the Association for Computational Linguistics: EACL 2026
- Month: March
- Year: 2026
- Address: Rabat, Morocco
- Editors: Vera Demberg, Kentaro Inui, Lluís Marquez
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 1655–1677
- URL: https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.84/
- Cite (ACL): Minjun Kim, Inho Won, HyeonSeok Lim, MinKyu Kim, Junghun Yuk, Wooyoung Go, Jongyoul Park, Jungyeul Park, and KyungTae Lim. 2026. TELLME: Test-Enhanced Learning for Language Model Enrichment. In Findings of the Association for Computational Linguistics: EACL 2026, pages 1655–1677, Rabat, Morocco. Association for Computational Linguistics.
- Cite (Informal): TELLME: Test-Enhanced Learning for Language Model Enrichment (Kim et al., Findings 2026)
- PDF: https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.84.pdf
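As a rough illustration of the TEL principle the abstract describes (quiz items interleaved into the pre-training stream so the model must retrieve recently seen knowledge), the toy sketch below shows one way such interleaving could be constructed. All names here (`make_quiz`, `build_tel_stream`, the cloze quiz format, and the sample documents) are illustrative assumptions, not the paper's actual procedure.

```python
# Toy sketch of Test-Enhanced Learning for continual pre-training:
# interleave self-quiz items into the document stream. This is an
# assumed, simplified construction, not the TELLME implementation.

def make_quiz(doc: str) -> tuple[str, str]:
    """Turn a 'SUBJECT is DEFINITION' document into a cloze-style quiz pair."""
    subject, _, answer = doc.partition(" is ")
    return f"What is {subject}?", answer

def build_tel_stream(docs: list[str], quiz_every: int = 2) -> list[str]:
    """Interleave quiz items into the training stream so the model is
    periodically tested on material it just saw (the testing effect)."""
    stream = []
    for i, doc in enumerate(docs, start=1):
        stream.append(doc)
        if i % quiz_every == 0:
            question, answer = make_quiz(doc)
            stream.append(f"Q: {question} A: {answer}")
    return stream

# Hypothetical financial-domain documents.
docs = [
    "EBITDA is earnings before interest, taxes, depreciation, and amortization",
    "ROE is net income divided by shareholder equity",
]
for item in build_tel_stream(docs):
    print(item)
```

In an actual CPT pipeline, the interleaved stream would simply replace the raw document stream fed to the trainer; the quiz frequency (`quiz_every` here) is the kind of knob such a scheme would expose.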