A Study on Efficiency, Accuracy and Document Structure for Answer Sentence Selection

Daniele Bonadiman, Alessandro Moschitti


Abstract
An essential task of most Question Answering (QA) systems is to re-rank the set of answer candidates, i.e., Answer Sentence Selection (AS2). These candidates are typically sentences either extracted from one or more documents, preserving their natural order, or retrieved by a search engine. Most state-of-the-art approaches to the task use huge neural models, such as BERT, or complex attentive architectures. In this paper, we argue that by exploiting the intrinsic structure of the original rank together with an effective word-relatedness encoder, we achieve the highest accuracy among cost-efficient models, with two orders of magnitude fewer parameters than the current state of the art. Our model takes 9.5 seconds to train on the WikiQA dataset, i.e., very fast compared with the 18 minutes required to fine-tune a standard BERT-base model.
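The core idea above, scoring each answer candidate with a word-relatedness encoder while injecting the candidate's original position in the document, can be illustrated with a minimal sketch. Everything below is a hedged assumption for exposition (the class name, layer sizes, and the rank-embedding mechanism are illustrative); it is not the authors' exact architecture.

import torch
import torch.nn as nn

class WordRelatednessEncoder(nn.Module):
    # Hypothetical sketch: scores a (question, candidate) pair from the
    # cosine-similarity matrix of their word embeddings, plus an embedding
    # of the candidate's original rank/position in the source document.
    def __init__(self, vocab_size, emb_dim=300, max_rank=50, rank_dim=16):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.rank_emb = nn.Embedding(max_rank, rank_dim)  # document-structure signal
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)  # over the similarity matrix
        self.pool = nn.AdaptiveMaxPool2d((4, 4))
        self.scorer = nn.Linear(8 * 4 * 4 + rank_dim, 1)

    def forward(self, question, candidate, rank):
        # question: (B, Lq), candidate: (B, Lc), rank: (B,) integer positions
        q = nn.functional.normalize(self.emb(question), dim=-1)
        c = nn.functional.normalize(self.emb(candidate), dim=-1)
        sim = torch.bmm(q, c.transpose(1, 2)).unsqueeze(1)  # (B, 1, Lq, Lc) cosine grid
        feats = self.pool(torch.relu(self.conv(sim))).flatten(1)
        feats = torch.cat([feats, self.rank_emb(rank)], dim=-1)
        return self.scorer(feats).squeeze(-1)  # one relevance score per candidate

Candidates would be scored independently and re-ranked by this score; the rank embedding is one cheap way such a model can exploit the document's original sentence order.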
Anthology ID:
2020.coling-main.457
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5211–5222
URL:
https://aclanthology.org/2020.coling-main.457
DOI:
10.18653/v1/2020.coling-main.457
Cite (ACL):
Daniele Bonadiman and Alessandro Moschitti. 2020. A Study on Efficiency, Accuracy and Document Structure for Answer Sentence Selection. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5211–5222, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
A Study on Efficiency, Accuracy and Document Structure for Answer Sentence Selection (Bonadiman & Moschitti, COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.457.pdf
Data
GLUE, Natural Questions, QNLI, SQuAD, WikiQA