Ensemble Transformer for Efficient and Accurate Ranking Tasks: an Application to Question Answering Systems

Yoshitomo Matsubara, Luca Soldaini, Eric Lind, Alessandro Moschitti


Abstract
Large transformer models can greatly improve accuracy on Answer Sentence Selection (AS2) tasks, but their high computational costs prevent their use in many real-world applications. In this paper, we explore the following research question: how can we make AS2 models more accurate without significantly increasing their model complexity? To address this question, we propose a Multiple Heads Student architecture (named CERBERUS), an efficient neural network designed to distill an ensemble of large transformers into a single smaller model. CERBERUS consists of two components: a stack of transformer layers used to encode inputs, and a set of ranking heads; unlike in traditional distillation techniques, each head is trained by distilling a different large transformer architecture in a way that preserves the diversity of the ensemble members. The resulting model captures the knowledge of heterogeneous transformer models while adding just a few extra parameters. We show the effectiveness of CERBERUS on three English datasets for AS2; our proposed approach outperforms all single-model distillations we consider, rivaling state-of-the-art large AS2 models that have 2.7× more parameters and run 2.5× slower. Code for our model is available at https://github.com/amazon-research/wqa-cerberus.
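To make the architecture described in the abstract concrete, the sketch below shows what a shared-encoder, multi-head student could look like in PyTorch. This is a minimal illustration, not the authors' reference implementation (which is linked above): the class name CerberusStudent, the encoder checkpoint, and the head count are all hypothetical, and the per-head teacher distillation loss is omitted.

```python
import torch
import torch.nn as nn
from transformers import AutoModel  # Hugging Face Transformers

class CerberusStudent(nn.Module):
    """Illustrative multi-head student (names are hypothetical): one shared
    transformer encoder plus several lightweight ranking heads, each intended
    to be distilled from a different large teacher model."""

    def __init__(self, encoder_name: str = "distilroberta-base", num_heads: int = 3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One linear ranking head per teacher; only these add extra parameters
        # on top of the shared encoder.
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(num_heads))

    def forward(self, input_ids, attention_mask):
        # Encode the (question, candidate answer) pair once; all heads share it.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] representation
        # Each head produces its own relevance score for the pair.
        scores = torch.cat([head(cls) for head in self.heads], dim=-1)
        return scores  # shape: (batch, num_heads)

# During training, head i would be fit against teacher i's scores; at inference,
# one plausible choice is to average the heads into a single ranking score:
#     final_score = model(ids, mask).mean(dim=-1)
```

The key efficiency property this sketch tries to capture is that the expensive encoder runs once per input, so ensembling over heads costs only a handful of extra linear layers rather than a full forward pass per ensemble member.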
Anthology ID:
2022.findings-emnlp.537
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7259–7272
URL:
https://aclanthology.org/2022.findings-emnlp.537
DOI:
10.18653/v1/2022.findings-emnlp.537
Cite (ACL):
Yoshitomo Matsubara, Luca Soldaini, Eric Lind, and Alessandro Moschitti. 2022. Ensemble Transformer for Efficient and Accurate Ranking Tasks: an Application to Question Answering Systems. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 7259–7272, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Ensemble Transformer for Efficient and Accurate Ranking Tasks: an Application to Question Answering Systems (Matsubara et al., Findings 2022)
PDF:
https://preview.aclanthology.org/naacl24-info/2022.findings-emnlp.537.pdf
Video:
https://preview.aclanthology.org/naacl24-info/2022.findings-emnlp.537.mp4