Abstract
Extractive QA models have shown very promising performance in predicting the correct answer to a question for a given passage. However, they sometimes predict the correct answer text but in a context irrelevant to the given question. This discrepancy becomes especially important as the number of occurrences of the answer text in a passage increases. To resolve this issue, we propose BLANC (BLock AttentioN for Context prediction) based on two main ideas: context prediction as an auxiliary task in a multi-task learning manner, and a block attention method that learns the context prediction task. With experiments on reading comprehension, we show that BLANC outperforms the state-of-the-art QA models, and that the performance gap increases as the number of answer text occurrences increases. We also train the models on SQuAD and predict the supporting facts on HotpotQA, showing that BLANC outperforms all baseline models in this zero-shot setting.
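The abstract describes a multi-task setup: the standard answer-span extraction loss is combined with an auxiliary context-prediction loss. Below is a minimal sketch of that idea, not the authors' code (see the yeonsw/BLANC repository for the official implementation); the function name `blanc_style_loss`, the weight `lambda_ctx`, and the binary per-token context labeling are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def blanc_style_loss(start_logits, end_logits, context_logits,
                     start_pos, end_pos, context_labels, lambda_ctx=0.5):
    """Sketch of a multi-task QA loss with auxiliary context prediction.

    start_logits, end_logits: (batch, seq_len) answer-span scores.
    context_logits: (batch, seq_len) per-token context scores (assumed head).
    context_labels: (batch, seq_len), 1 for tokens in the supporting context.
    """
    # Standard extractive-QA objective over answer start/end positions.
    span_loss = (F.cross_entropy(start_logits, start_pos)
                 + F.cross_entropy(end_logits, end_pos)) / 2
    # Auxiliary task: predict which tokens belong to the answer's context
    # block, so the model is penalized for spans in irrelevant contexts.
    ctx_loss = F.binary_cross_entropy_with_logits(
        context_logits, context_labels.float())
    return span_loss + lambda_ctx * ctx_loss

if __name__ == "__main__":
    # Toy usage with random logits: batch of 2, sequence length 8.
    B, L = 2, 8
    loss = blanc_style_loss(
        torch.randn(B, L), torch.randn(B, L), torch.randn(B, L),
        torch.tensor([1, 2]), torch.tensor([3, 4]), torch.zeros(B, L))
    print(loss.item())
```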
- Anthology ID: 2020.emnlp-main.189
- Volume: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
- Month: November
- Year: 2020
- Address: Online
- Editors: Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 2418–2428
- URL: https://aclanthology.org/2020.emnlp-main.189
- DOI: 10.18653/v1/2020.emnlp-main.189
- Cite (ACL): Yeon Seonwoo, Ji-Hoon Kim, Jung-Woo Ha, and Alice Oh. 2020. Context-Aware Answer Extraction in Question Answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2418–2428, Online. Association for Computational Linguistics.
- Cite (Informal): Context-Aware Answer Extraction in Question Answering (Seonwoo et al., EMNLP 2020)
- PDF: https://preview.aclanthology.org/nschneid-patch-1/2020.emnlp-main.189.pdf
- Code: yeonsw/BLANC
- Data: HotpotQA, Natural Questions, NewsQA, SQuAD