Abstract
We propose a question answering (QA) approach for standardized science exams that both identifies correct answers and produces compelling human-readable justifications for why those answers are correct. Our method first identifies the actual information needed in a question using psycholinguistic concreteness norms, then uses this information need to construct answer justifications by aggregating multiple sentences from different knowledge bases using syntactic and lexical information. We then jointly rank answers and their justifications using a reranking perceptron that treats justification quality as a latent variable. We evaluate our method on 1,000 multiple-choice questions from elementary school science exams, and empirically demonstrate that it performs better than several strong baselines, including neural network approaches. Our best configuration answers 44% of the questions correctly, where the top justifications for 57% of these correct answers contain a compelling human-readable justification that explains the inference required to arrive at the correct answer. We include a detailed characterization of the justification quality for both our method and a strong baseline, and show that information aggregation is key to addressing the information need in complex questions.
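The joint ranking step described above treats justification quality as a latent variable: supervision says only which answer is correct, not which justification best supports it. Below is a minimal sketch of how a latent reranking perceptron update of this kind can work. It is illustrative only, not the authors' implementation: the data layout (each answer candidate carrying one feature vector per candidate justification), the function names, and the plain unaveraged update are all assumptions for the sake of the example.

```python
# Minimal sketch of a latent reranking perceptron for jointly ranking
# answers and justifications. Hypothetical data layout, NOT the
# authors' implementation: `questions` is a list of (answers, gold)
# pairs, where `answers` is a list of answer candidates and each
# candidate is a list of justification feature vectors (np.ndarray).
import numpy as np

def best_pair(w, answers):
    """Return (answer_index, justification_feats, score) for the single
    highest-scoring (answer, justification) pair under weights w."""
    best_a, best_feats, best_score = None, None, -np.inf
    for a_idx, justifications in enumerate(answers):
        for feats in justifications:
            score = float(w @ feats)
            if score > best_score:
                best_a, best_feats, best_score = a_idx, feats, score
    return best_a, best_feats, best_score

def train(questions, n_feats, epochs=10):
    """Perceptron training where the justification choice is latent:
    we know which answer is correct, but the supporting justification
    is picked as the gold answer's best-scoring one at each step."""
    w = np.zeros(n_feats)
    for _ in range(epochs):
        for answers, gold in questions:
            pred_a, pred_feats, _ = best_pair(w, answers)
            if pred_a != gold:
                # Latent positive: the currently best-scoring
                # justification of the gold answer.
                _, gold_feats, _ = best_pair(w, [answers[gold]])
                w += gold_feats - pred_feats
    return w
```

In the paper, the justification features come from the aggregated intersentence structures built in the earlier steps; here they are treated as opaque vectors so the latent update itself stays visible.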
- Anthology ID: J17-2005
- Volume: Computational Linguistics, Volume 43, Issue 2 - June 2017
- Month: June
- Year: 2017
- Address: Cambridge, MA
- Venue: CL
- Publisher: MIT Press
- Pages: 407–449
- URL: https://aclanthology.org/J17-2005
- DOI: 10.1162/COLI_a_00287
- Cite (ACL): Peter Jansen, Rebecca Sharp, Mihai Surdeanu, and Peter Clark. 2017. Framing QA as Building and Ranking Intersentence Answer Justifications. Computational Linguistics, 43(2):407–449.
- Cite (Informal): Framing QA as Building and Ranking Intersentence Answer Justifications (Jansen et al., CL 2017)
- PDF: https://preview.aclanthology.org/fix-dup-bibkey/J17-2005.pdf