FRAME: Boosting LLMs with A Four-Quadrant Multi-Stage Pretraining Strategy
Xuemiao Zhang, Feiyu Duan, Xu Liangyu, Yongwei Zhou, Sirui Wang, Rongxiang Weng, Jingang Wang, Xunliang Cai
Abstract
Large language models (LLMs) have significantly advanced human language understanding and generation, and the quality and organization of their pretraining data are crucial to their performance. Multi-stage pretraining is a promising approach, but existing methods often lack quantitative criteria for data partitioning and instead rely on intuitive heuristics. In this paper, we propose FRAME, a novel Four-quadRAnt Multi-stage prEtraining strategy, guided by the principle of organizing the pretraining process into four stages so that the loss drops significantly four times. This principle is grounded in two key findings: training on high-perplexity (PPL) data followed by low-PPL data, and training on low-PPL-difference (PD) data followed by high-PD data, each cause the loss to drop significantly twice and improve performance. By partitioning data into four quadrants and strategically ordering them, FRAME achieves a 16.8% average improvement over the random baseline on MMLU and CMMLU for a 3B model, effectively boosting LLM performance.
- Anthology ID:
- 2025.findings-acl.1040
- Volume:
- Findings of the Association for Computational Linguistics: ACL 2025
- Month:
- July
- Year:
- 2025
- Address:
- Vienna, Austria
- Editors:
- Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 20278–20297
- URL:
- https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.1040/
- DOI:
- 10.18653/v1/2025.findings-acl.1040
- Cite (ACL):
- Xuemiao Zhang, Feiyu Duan, Xu Liangyu, Yongwei Zhou, Sirui Wang, Rongxiang Weng, Jingang Wang, and Xunliang Cai. 2025. FRAME: Boosting LLMs with A Four-Quadrant Multi-Stage Pretraining Strategy. In Findings of the Association for Computational Linguistics: ACL 2025, pages 20278–20297, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal):
- FRAME: Boosting LLMs with A Four-Quadrant Multi-Stage Pretraining Strategy (Zhang et al., Findings 2025)
- PDF:
- https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.1040.pdf
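The four-quadrant partitioning described in the abstract can be illustrated with a minimal sketch. This is not the authors' released code: it assumes each document already carries a reference-model perplexity score (`ppl`) and a PPL-difference score (`pd`), uses median thresholds for both axes, and orders the stages as a rough placeholder that follows the two findings (high PPL before low PPL, low PD before high PD); the exact ordering FRAME uses is specified in the paper.

```python
# Minimal sketch (assumptions: 'ppl' and 'pd' scores are precomputed per
# document by a reference model; median splits define the four quadrants;
# the stage order below is a placeholder, not the paper's exact recipe).
from statistics import median

def partition_four_quadrants(docs):
    """docs: list of dicts with 'ppl' and 'pd' keys; returns four stage lists."""
    ppl_split = median(d["ppl"] for d in docs)
    pd_split = median(d["pd"] for d in docs)

    quadrants = {"hi_ppl_lo_pd": [], "lo_ppl_lo_pd": [],
                 "hi_ppl_hi_pd": [], "lo_ppl_hi_pd": []}
    for d in docs:
        ppl_tag = "hi_ppl" if d["ppl"] >= ppl_split else "lo_ppl"
        pd_tag = "hi_pd" if d["pd"] >= pd_split else "lo_pd"
        quadrants[f"{ppl_tag}_{pd_tag}"].append(d)

    # Placeholder stage order loosely combining both findings:
    # low-PD quadrants before high-PD, and within each, high PPL before low PPL.
    return [quadrants["hi_ppl_lo_pd"], quadrants["lo_ppl_lo_pd"],
            quadrants["hi_ppl_hi_pd"], quadrants["lo_ppl_hi_pd"]]

# Toy usage with made-up scores:
docs = [{"id": i, "ppl": p, "pd": q}
        for i, (p, q) in enumerate([(12.1, 0.3), (3.4, 1.8), (9.7, 2.2), (2.8, 0.1)])]
for stage, bucket in enumerate(partition_four_quadrants(docs), 1):
    print(f"stage {stage}: {[d['id'] for d in bucket]}")
```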