Continually Improving Extractive QA via Human Feedback

Ge Gao, Hung-Ting Chen, Yoav Artzi, Eunsol Choi


Abstract
We study continually improving an extractive question answering (QA) system via human user feedback. We design and deploy an iterative approach in which information-seeking users ask questions, receive model-predicted answers, and provide feedback. We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time. Our experiments show that extractive QA models improve effectively from user feedback over time across different data regimes, and demonstrate significant potential for domain adaptation.
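The abstract outlines an iterative deploy-collect-update loop: the deployed model answers users' questions, users rate the predicted answers, and the model is updated on the accumulated feedback before the next round of deployment. The sketch below is a minimal rendering of that loop under assumed interfaces; every name in it (Interaction, predict, update, get_queries, get_feedback) is hypothetical, and the paper's actual learning signal and update procedure are not reproduced here.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# All names below are hypothetical stand-ins, not the paper's actual code.
@dataclass
class Interaction:
    question: str
    context: str
    predicted_span: str
    feedback: str  # e.g. "correct", "partially_correct", "wrong"

def continual_improvement(
    predict: Callable[[str, str], str],           # extractive QA model: (question, context) -> answer span
    update: Callable[[List[Interaction]], None],  # learning step over one round of collected feedback
    get_queries: Callable[[], List[Tuple[str, str]]],  # users' (question, context) pairs for this round
    get_feedback: Callable[[str, str], str],      # user feedback on a predicted answer
    num_rounds: int,
) -> None:
    """Deploy the model, collect user feedback, update, and repeat."""
    for _ in range(num_rounds):
        interactions: List[Interaction] = []
        for question, context in get_queries():
            span = predict(question, context)     # model answers the user's question
            label = get_feedback(question, span)  # user rates the predicted answer
            interactions.append(Interaction(question, context, span, label))
        update(interactions)  # improve the model from this round's feedback
```

Batching updates by round, rather than updating after every interaction, matches the "improvement over time" framing of the abstract, but the granularity of updates is an assumption here.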
Anthology ID: 2023.emnlp-main.27
Volume: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 406–423
URL: https://aclanthology.org/2023.emnlp-main.27
DOI: 10.18653/v1/2023.emnlp-main.27
Cite (ACL): Ge Gao, Hung-Ting Chen, Yoav Artzi, and Eunsol Choi. 2023. Continually Improving Extractive QA via Human Feedback. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 406–423, Singapore. Association for Computational Linguistics.
Cite (Informal): Continually Improving Extractive QA via Human Feedback (Gao et al., EMNLP 2023)
PDF: https://preview.aclanthology.org/naacl24-info/2023.emnlp-main.27.pdf