How to inject knowledge efficiently? Knowledge Infusion Scaling Law for Pre-training Large Language Models

Kangtao Lv, Haibin Chen, Yujin Yuan, Langming Liu, Shilei Liu, Yongwei Wang, Wenbo Su, Bo Zheng


Abstract
Large language models (LLMs) have attracted significant attention due to their impressive general capabilities across diverse downstream tasks. However, without domain-specific optimization, they often underperform on specialized knowledge benchmarks and even produce hallucinations. Recent studies show that strategically infusing domain knowledge during pretraining can substantially improve downstream performance. A critical challenge lies in balancing this infusion trade-off: injecting too little domain-specific data yields insufficient specialization, whereas excessive infusion triggers catastrophic forgetting of previously acquired knowledge. In this work, we focus on the phenomenon of memory collapse induced by over-infusion. Through systematic experiments, we make two key observations: (1) critical collapse point: each model exhibits a threshold beyond which its knowledge retention sharply degrades; and (2) scale correlation: these collapse points scale consistently with model size. Building on these insights, we propose a knowledge infusion scaling law that predicts the optimal amount of domain knowledge to inject into larger LLMs by analyzing their smaller counterparts. Extensive experiments across different model sizes and pretraining token budgets validate both the effectiveness and generalizability of our scaling law.
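The abstract does not give the functional form of the proposed scaling law, so the following is only a minimal sketch of the general idea it describes: measure the critical collapse point on several small models, fit a size-dependent relationship, and extrapolate it to choose an infusion budget for a larger model. The power-law form, the helper `predicted_collapse_point`, and all numbers are hypothetical placeholders, not results or code from the paper.

```python
import numpy as np

# Hypothetical measurements: model size (parameters) vs. observed critical
# collapse point (domain tokens injected before knowledge retention degrades).
# Values are illustrative placeholders, not results from the paper.
model_sizes = np.array([1.4e8, 4.1e8, 1.0e9])      # e.g., 140M, 410M, 1B params
collapse_tokens = np.array([2.0e8, 5.5e8, 1.3e9])  # observed collapse thresholds

# Assume (for illustration) a power law collapse(N) = a * N^b, fit in log space.
b, log_a = np.polyfit(np.log(model_sizes), np.log(collapse_tokens), deg=1)
a = np.exp(log_a)

def predicted_collapse_point(n_params: float) -> float:
    """Extrapolate the collapse threshold to a larger model size."""
    return a * n_params ** b

# Choose a domain-token budget safely below the predicted threshold.
target_size = 7.0e9  # 7B-parameter model
budget = 0.9 * predicted_collapse_point(target_size)
print(f"Predicted collapse point: {predicted_collapse_point(target_size):.3e} tokens")
print(f"Suggested domain-token budget: {budget:.3e} tokens")
```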
Anthology ID:
2025.emnlp-main.1331
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
26204–26219
URL:
https://preview.aclanthology.org/corrections-2025-11/2025.emnlp-main.1331/
DOI:
10.18653/v1/2025.emnlp-main.1331
Cite (ACL):
Kangtao Lv, Haibin Chen, Yujin Yuan, Langming Liu, Shilei Liu, Yongwei Wang, Wenbo Su, and Bo Zheng. 2025. How to inject knowledge efficiently? Knowledge Infusion Scaling Law for Pre-training Large Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 26204–26219, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
How to inject knowledge efficiently? Knowledge Infusion Scaling Law for Pre-training Large Language Models (Lv et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/corrections-2025-11/2025.emnlp-main.1331.pdf
Checklist:
2025.emnlp-main.1331.checklist.pdf