Learning from Committee: Reasoning Distillation from a Mixture of Teachers with Peer-Review

Zhuochun Li, Yuelyu Ji, Rui Meng, Daqing He


Abstract
While reasoning capabilities typically emerge in large language models (LLMs) with tens of billions of parameters, recent research focuses on improving smaller open-source models through knowledge distillation (KD) from commercial LLMs. However, many of these studies rely solely on responses from a single LLM as the gold rationale, unlike the natural human learning process, which involves understanding both the correct answers and the reasons behind mistakes. In this paper, we introduce a novel Fault-Aware DistIllation via Peer-Review (FAIR) approach: 1) instead of merely obtaining rationales from teachers, our method asks teachers to identify and explain the student’s mistakes, providing customized instruction learning data; 2) we design a simulated peer-review process between teacher LLMs and select only the generated rationales above the acceptance threshold, which reduces the chance of teachers guessing correctly with flawed rationales, improving instructional data quality. Comprehensive experiments and analysis on mathematical, commonsense, and logical reasoning tasks demonstrate the effectiveness of our method. Our code is available at https://github.com/zhuochunli/Learn-from-Committee.
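The following is a minimal sketch (not the authors' implementation, which is in the linked repository) of the peer-review filtering idea the abstract describes: each teacher LLM rates the rationales produced by its peers, and only rationales whose average peer score clears an acceptance threshold are kept as distillation data. The `query_teacher` wrapper and the 0.7 threshold are hypothetical placeholders.

```python
# Sketch of peer-review filtering over teacher-generated rationales.
# Assumes a caller-supplied LLM client; names and threshold are illustrative.

from statistics import mean


def query_teacher(teacher: str, prompt: str) -> str:
    """Hypothetical wrapper around a teacher LLM API call."""
    raise NotImplementedError("plug in an actual LLM client here")


def peer_review_filter(question, rationales, teachers, threshold=0.7):
    """Keep only rationales whose mean peer score >= threshold.

    rationales: dict mapping author teacher -> its generated rationale.
    teachers:   list of teacher names acting as reviewers.
    """
    accepted = []
    for author, rationale in rationales.items():
        scores = []
        for reviewer in teachers:
            if reviewer == author:
                continue  # a teacher does not review its own rationale
            prompt = (
                f"Question: {question}\n"
                f"Candidate rationale: {rationale}\n"
                "Rate the correctness of this rationale from 0 to 1."
            )
            scores.append(float(query_teacher(reviewer, prompt)))
        if scores and mean(scores) >= threshold:
            accepted.append((author, rationale))
    return accepted
```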
Anthology ID:
2025.findings-acl.217
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4190–4205
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.217/
Cite (ACL):
Zhuochun Li, Yuelyu Ji, Rui Meng, and Daqing He. 2025. Learning from Committee: Reasoning Distillation from a Mixture of Teachers with Peer-Review. In Findings of the Association for Computational Linguistics: ACL 2025, pages 4190–4205, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Learning from Committee: Reasoning Distillation from a Mixture of Teachers with Peer-Review (Li et al., Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.217.pdf