Eliciting Bias in Question Answering Models through Ambiguity

Andrew Mao, Naveen Raman, Matthew Shu, Eric Li, Franklin Yang, Jordan Boyd-Graber


Abstract
Question answering (QA) models use retriever and reader systems to answer questions. Because QA systems rely on training data, their responses can reflect or amplify inequities in that data. Many QA models, such as those for the SQuAD dataset, are trained and tested on a subset of Wikipedia articles, which encode their own biases and also reproduce real-world inequality. Understanding how training data affects bias in QA systems can inform methods to mitigate inequity. We develop two question sets, for closed- and open-domain settings respectively, that use ambiguous questions to probe QA models for bias. We feed our question sets to three deep-learning-based QA systems and evaluate their responses for bias using our metrics. We find that open-domain QA models amplify biases more than their closed-domain counterparts, and we propose that biases in the retriever surface more readily because it has greater freedom of choice.
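
As an illustration of the probing setup the abstract describes, the sketch below (a hypothetical example, not the authors' released code) poses an ambiguous question to an off-the-shelf SQuAD-style extractive reader: the context names two people, the question underdetermines which one is the answer, so a consistent skew in the model's picks across name pairs suggests a learned association rather than textual evidence. The model choice, names, and template are illustrative assumptions.

```python
# Illustrative sketch only -- not the authors' code. Assumes the
# HuggingFace `transformers` library; the model, names, and template
# below are hypothetical choices, not the paper's question sets.
from collections import Counter
from transformers import pipeline

# Any extractive reader fine-tuned on SQuAD would serve here.
reader = pipeline("question-answering",
                  model="distilbert-base-cased-distilled-squad")

# Ambiguous probe: the context supports either person as the answer.
template = "{first} and {second} both work at the hospital."
question = "Who is the doctor?"
# (male name, female name) pairs; purely illustrative.
name_pairs = [("John", "Mary"), ("James", "Patricia"), ("Robert", "Linda")]

picks = Counter()
for male, female in name_pairs:
    # Present each pair in both orders to control for position effects.
    for first, second in [(male, female), (female, male)]:
        context = template.format(first=first, second=second)
        answer = reader(question=question, context=context)["answer"]
        if male in answer:
            picks["male name"] += 1
        elif female in answer:
            picks["female name"] += 1

print(picks)  # a persistent skew on ambiguous probes is evidence of bias
```

In an open-domain system the same probes would first pass through a retriever over a document collection, which, per the paper's finding, gives bias more room to surface.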
Anthology ID: 2021.mrqa-1.9
Volume: Proceedings of the 3rd Workshop on Machine Reading for Question Answering
Month: November
Year: 2021
Address: Punta Cana, Dominican Republic
Editors: Adam Fisch, Alon Talmor, Danqi Chen, Eunsol Choi, Minjoon Seo, Patrick Lewis, Robin Jia, Sewon Min
Venue: MRQA
Publisher: Association for Computational Linguistics
Pages: 92–99
URL: https://aclanthology.org/2021.mrqa-1.9
DOI: 10.18653/v1/2021.mrqa-1.9
Cite (ACL): Andrew Mao, Naveen Raman, Matthew Shu, Eric Li, Franklin Yang, and Jordan Boyd-Graber. 2021. Eliciting Bias in Question Answering Models through Ambiguity. In Proceedings of the 3rd Workshop on Machine Reading for Question Answering, pages 92–99, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal): Eliciting Bias in Question Answering Models through Ambiguity (Mao et al., MRQA 2021)
PDF: https://preview.aclanthology.org/ingest-acl-2023-videos/2021.mrqa-1.9.pdf
Code: axz5fy3e6fq07q13/emnlp_bias
Data: SQuAD