Trick Me If You Can: Adversarial Writing of Trivia Challenge Questions

Eric Wallace, Jordan Boyd-Graber


Abstract
Modern question answering systems have been touted as approaching human performance. However, existing question answering datasets are imperfect tests. Questions are written with humans in mind, not computers, and often do not properly expose model limitations. To address this, we develop an adversarial writing setting, where humans interact with trained models and try to break them. This annotation process yields a challenge set which, despite being easy for trivia players to answer, systematically stumps automated question answering systems. Diagnosing model errors on the evaluation data provides actionable insights for developing robust and generalizable question answering systems.
Anthology ID: P18-3018
Volume: Proceedings of ACL 2018, Student Research Workshop
Month: July
Year: 2018
Address: Melbourne, Australia
Editors: Vered Shwartz, Jeniya Tabassum, Rob Voigt, Wanxiang Che, Marie-Catherine de Marneffe, Malvina Nissim
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 127–133
URL: https://aclanthology.org/P18-3018
DOI: 10.18653/v1/P18-3018
Cite (ACL): Eric Wallace and Jordan Boyd-Graber. 2018. Trick Me If You Can: Adversarial Writing of Trivia Challenge Questions. In Proceedings of ACL 2018, Student Research Workshop, pages 127–133, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal): Trick Me If You Can: Adversarial Writing of Trivia Challenge Questions (Wallace & Boyd-Graber, ACL 2018)
PDF: https://preview.aclanthology.org/naacl24-info/P18-3018.pdf
Data: TriviaQA