Elaboration-Generating Commonsense Question Answering at Scale

Wenya Wang, Vivek Srikumar, Hannaneh Hajishirzi, Noah A. Smith


Abstract
In question answering requiring common sense, language models (e.g., GPT-3) have been used to generate text expressing background knowledge that helps improve performance. Yet the cost of working with such models is very high; in this work, we finetune smaller language models to generate useful intermediate context, referred to here as elaborations. Our framework alternates between updating two language models—an elaboration generator and an answer predictor—allowing each to influence the other. Using less than 0.5% of the parameters of GPT-3, our model outperforms alternatives with similar sizes and closes the gap with GPT-3 on four commonsense question answering benchmarks. Human evaluations show that the quality of the generated elaborations is high.
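The abstract describes a framework that alternates between updating an elaboration generator and an answer predictor. The following is a minimal sketch of that alternating-update idea only, not the authors' actual implementation: the model classes, losses, optimizers, and the placeholder data are assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's code) of alternating updates between an
# elaboration generator G and an answer predictor P, as described in the abstract.
# All model classes, objectives, and data below are illustrative placeholders.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Stand-in for a small finetuned language model (generator or predictor)."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, token_ids):
        # Mean-pool token embeddings, then score the vocabulary.
        return self.head(self.embed(token_ids).mean(dim=1))  # (batch, vocab)

generator = TinyLM()   # produces an elaboration for a question
predictor = TinyLM()   # answers the question given question + elaboration

gen_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
pred_opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def fake_batch(batch=8, seq=16, vocab=1000):
    """Placeholder batch: token ids for question(+elaboration) and gold answer ids."""
    return torch.randint(0, vocab, (batch, seq)), torch.randint(0, vocab, (batch,))

for step in range(100):
    # Phase 1: update the answer predictor with the generator held fixed.
    inputs, answers = fake_batch()
    pred_opt.zero_grad()
    loss_fn(predictor(inputs), answers).backward()
    pred_opt.step()

    # Phase 2: update the elaboration generator using feedback from the (now frozen)
    # predictor, e.g. favoring elaborations that help it answer correctly.
    inputs, answers = fake_batch()
    gen_opt.zero_grad()
    loss_fn(generator(inputs), answers).backward()  # placeholder objective
    gen_opt.step()
```

The two-phase loop is the point of the sketch: each model is trained while the other is held fixed, so the generator learns to produce elaborations that the predictor finds useful, and the predictor learns to exploit them.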
Anthology ID:
2023.acl-long.90
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1619–1635
URL:
https://aclanthology.org/2023.acl-long.90
DOI:
10.18653/v1/2023.acl-long.90
Cite (ACL):
Wenya Wang, Vivek Srikumar, Hannaneh Hajishirzi, and Noah A. Smith. 2023. Elaboration-Generating Commonsense Question Answering at Scale. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1619–1635, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Elaboration-Generating Commonsense Question Answering at Scale (Wang et al., ACL 2023)
PDF:
https://preview.aclanthology.org/improve-issue-templates/2023.acl-long.90.pdf
Video:
https://preview.aclanthology.org/improve-issue-templates/2023.acl-long.90.mp4