Personalized Distillation: Empowering Open-Sourced LLMs with Adaptive Learning for Code Generation

Hailin Chen, Amrita Saha, Steven Hoi, Shafiq Joty


Abstract
With the rise of powerful closed-source LLMs (ChatGPT, GPT-4), there is increasing interest in distilling the capabilities of closed-source LLMs into smaller open-source LLMs. Previous distillation methods usually prompt ChatGPT to generate a set of instructions and answers for the student model to learn from. However, such a standard distillation approach neglects the merits and conditions of the student model. Inspired by modern teaching principles, we design a personalized distillation process, in which the student first attempts to solve a task, and the teacher then provides an adaptive refinement for the student to improve. Instead of feeding the student the teacher's prior, personalized distillation enables personalized learning for the student model, as it learns only from examples it makes mistakes on and learns to improve its own solutions. On code generation, personalized distillation consistently outperforms standard distillation with only one third of the data. With only 2.5K–3K personalized examples that incur a data-collection cost of $4–6, we boost CodeGen-mono-16B by 7% to achieve 36.4% pass@1 and StarCoder by 12.2% to achieve 45.8% pass@1 on HumanEval.
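The adaptive loop described in the abstract (student attempts first, teacher refines only the failures) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the callables `student_generate`, `teacher_refine`, and `passes_tests` are hypothetical placeholders for the student model, the teacher LLM prompt, and a unit-test check.

```python
def collect_personalized_data(tasks, student_generate, teacher_refine, passes_tests):
    """One round of personalized data collection (illustrative sketch):
    the student attempts each task first; only failed attempts are sent
    to the teacher, whose refinement becomes the fine-tuning target."""
    finetune_data = []
    for prompt in tasks:
        attempt = student_generate(prompt)         # student tries first
        if passes_tests(prompt, attempt):          # already correct: nothing to learn
            continue
        # teacher sees the student's own failed solution and adaptively refines it
        refined = teacher_refine(prompt, attempt)
        if passes_tests(prompt, refined):          # keep only verified refinements
            finetune_data.append((prompt, refined))
    return finetune_data
```

The student is then fine-tuned on the (prompt, refinement) pairs, so it trains only on examples it originally got wrong.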
Anthology ID:
2023.emnlp-main.417
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6737–6749
URL:
https://aclanthology.org/2023.emnlp-main.417
DOI:
10.18653/v1/2023.emnlp-main.417
Bibkey:
Cite (ACL):
Hailin Chen, Amrita Saha, Steven Hoi, and Shafiq Joty. 2023. Personalized Distillation: Empowering Open-Sourced LLMs with Adaptive Learning for Code Generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6737–6749, Singapore. Association for Computational Linguistics.
Cite (Informal):
Personalized Distillation: Empowering Open-Sourced LLMs with Adaptive Learning for Code Generation (Chen et al., EMNLP 2023)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2023.emnlp-main.417.pdf
Video:
https://preview.aclanthology.org/dois-2013-emnlp/2023.emnlp-main.417.mp4