DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models

Chengyu Wang, Junbing Yan, Yuanhao Yue, Jun Huang


Abstract
Enhancing computational efficiency and reducing deployment costs for large language models (LLMs) have become critical challenges in various resource-constrained scenarios. In this work, we present DistilQwen2.5, a family of distilled, lightweight LLMs derived from the public Qwen2.5 models. These distilled models exhibit stronger instruction-following capabilities than the original models, achieved through a series of distillation techniques that incorporate knowledge from much larger LLMs. In our industrial practice, we first leverage powerful proprietary LLMs with varying capacities as multi-agent teachers to select, rewrite, and refine instruction-response pairs so that they are better suited for student LLMs to learn from. After standard fine-tuning, we further apply a computationally efficient model fusion approach that enables student models to progressively integrate fine-grained hidden knowledge from their teachers. Experimental evaluations demonstrate that the distilled models possess significantly stronger capabilities than their original checkpoints. Additionally, we present use cases to illustrate the applications of our framework in real-world scenarios. To facilitate practical use, we have released all the DistilQwen2.5 models to the open-source community.
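For readers unfamiliar with the general recipe, the sketch below illustrates the kind of teacher-in-the-loop data curation the abstract describes: a large teacher LLM rewrites and scores instruction-response pairs, and only high-quality pairs are kept for student fine-tuning. This is a minimal, hypothetical illustration; the names `query_teacher`, `REWRITE_PROMPT`, and `SCORE_PROMPT`, as well as the prompt wording and the score threshold, are assumptions for illustration and not the authors' implementation.

```python
# Minimal sketch of black-box instruction distillation: a teacher LLM
# rewrites and scores instruction-response pairs before student fine-tuning.
# `query_teacher` is a placeholder for any call to a large proprietary LLM.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    instruction: str
    response: str


# Illustrative prompts (assumptions, not the paper's actual prompts).
REWRITE_PROMPT = (
    "Rewrite the following response so that it is concise, correct, and easy "
    "for a small model to imitate.\nInstruction: {instruction}\nResponse: {response}"
)
SCORE_PROMPT = (
    "On a scale of 1-10, rate how suitable this instruction-response pair is "
    "as training data for a small model. Reply with a single number.\n"
    "Instruction: {instruction}\nResponse: {response}"
)


def curate(
    data: List[Example],
    query_teacher: Callable[[str], str],  # wraps a call to the teacher LLM
    min_score: int = 7,
) -> List[Example]:
    """Teacher-guided rewriting and filtering of training pairs."""
    curated = []
    for ex in data:
        # Ask the teacher to rewrite the response into a cleaner target.
        rewritten = query_teacher(
            REWRITE_PROMPT.format(instruction=ex.instruction, response=ex.response)
        )
        # Ask the teacher to score the rewritten pair; keep only high scores.
        score_text = query_teacher(
            SCORE_PROMPT.format(instruction=ex.instruction, response=rewritten)
        )
        try:
            score = int(score_text.strip().split()[0])
        except ValueError:
            continue  # skip pairs the teacher could not score
        if score >= min_score:
            curated.append(Example(ex.instruction, rewritten))
    return curated


if __name__ == "__main__":
    # Toy run with a dummy "teacher" that returns fixed outputs.
    def dummy_teacher(prompt: str) -> str:
        return "8" if "rate" in prompt.lower() else "A short, polished answer."

    raw = [Example("Explain gradient descent.", "It is when you go down a hill.")]
    print(curate(raw, dummy_teacher))
```

The curated pairs would then feed a standard supervised fine-tuning run for the student model; the subsequent model fusion stage mentioned in the abstract is a separate step not covered by this sketch.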
Anthology ID: 2025.acl-industry.4
Volume: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track)
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Georg Rehm, Yunyao Li
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 32–42
URL: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-industry.4/
Cite (ACL): Chengyu Wang, Junbing Yan, Yuanhao Yue, and Jun Huang. 2025. DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track), pages 32–42, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models (Wang et al., ACL 2025)
PDF: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-industry.4.pdf