Simulating Bandit Learning from User Feedback for Extractive Question Answering

Ge Gao, Eunsol Choi, Yoav Artzi


Abstract
We study learning from user feedback for extractive question answering by simulating feedback using supervised data. We cast the problem as contextual bandit learning, and analyze the characteristics of several learning scenarios with a focus on reducing data annotation. We show that systems initially trained on few examples can dramatically improve given feedback from users on model-predicted answers, and that one can use existing datasets to deploy systems in new domains without any annotation effort, instead improving the system on-the-fly via user feedback.
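The contextual-bandit setup the abstract describes can be illustrated with a toy sketch: a policy scores candidate answer spans for a question, samples one as its prediction, and receives feedback only on that single prediction (simulated here by checking the sampled span against the gold answer, standing in for a user's accept/reject signal). Everything below — the feature-based policy, the synthetic examples, and the learning rate — is a hypothetical illustration of the general technique, not the paper's actual model or training recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical linear policy: score(span) = w . phi(context, span).
dim = 4
w = np.zeros(dim)

def make_example():
    """Synthetic context: 5 candidate spans with random features.

    Index 0 plays the role of the gold span; its features are shifted
    so that a correct policy is learnable. Purely illustrative data.
    """
    phi = rng.normal(size=(5, dim))
    phi[0] += 1.0
    return phi, 0

lr = 0.1
for step in range(2000):
    phi, gold = make_example()
    p = softmax(phi @ w)
    a = rng.choice(len(p), p=p)         # sample a predicted span
    reward = 1.0 if a == gold else 0.0  # simulated binary user feedback
    # Bandit policy-gradient (REINFORCE) update on the chosen arm only:
    # grad log p(a) = phi[a] - sum_b p(b) * phi[b]
    w += lr * reward * (phi[a] - p @ phi)
```

The key bandit property is visible in the update: the learner never sees the gold span directly, only a scalar reward for the span it actually predicted, which is how feedback from real users would arrive.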
Anthology ID:
2022.acl-long.355
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
5167–5179
URL:
https://aclanthology.org/2022.acl-long.355
DOI:
10.18653/v1/2022.acl-long.355
Cite (ACL):
Ge Gao, Eunsol Choi, and Yoav Artzi. 2022. Simulating Bandit Learning from User Feedback for Extractive Question Answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5167–5179, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Simulating Bandit Learning from User Feedback for Extractive Question Answering (Gao et al., ACL 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.acl-long.355.pdf
Video:
https://preview.aclanthology.org/ingestion-script-update/2022.acl-long.355.mp4
Code:
lil-lab/bandit-qa
Data:
HotpotQA, MRQA, Natural Questions, NewsQA, SQuAD, SearchQA, TriviaQA