Interactive Instance-based Evaluation of Knowledge Base Question Answering

Daniil Sorokin, Iryna Gurevych


Abstract
Most approaches to Knowledge Base Question Answering are based on semantic parsing. In this paper, we present a tool that aids in debugging question answering systems that construct a structured semantic representation for the input question. Previous work has largely focused on building question answering interfaces or evaluation frameworks that unify multiple data sets. The primary objective of our system is to enable interactive debugging of model predictions on individual instances (questions) and to simplify manual error analysis. Our interactive interface helps researchers understand the shortcomings of a particular model, qualitatively analyze the complete pipeline, and compare different models. A set of sit-by sessions was used to validate our interface design.
Anthology ID:
D18-2020
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Month:
November
Year:
2018
Address:
Brussels, Belgium
Editors:
Eduardo Blanco, Wei Lu
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
114–119
URL:
https://aclanthology.org/D18-2020
DOI:
10.18653/v1/D18-2020
Cite (ACL):
Daniil Sorokin and Iryna Gurevych. 2018. Interactive Instance-based Evaluation of Knowledge Base Question Answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 114–119, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Interactive Instance-based Evaluation of Knowledge Base Question Answering (Sorokin & Gurevych, EMNLP 2018)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/D18-2020.pdf
Code:
UKPLab/emnlp2018-question-answering-interface