Abstract
Conversational machine comprehension requires a deep understanding of the dialogue flow, and prior work proposed FlowQA to implicitly model the context representations during reasoning for better understanding. This paper proposes to explicitly model the information gain through dialogue reasoning, allowing the model to focus on more informative cues. The proposed model achieves state-of-the-art performance on the conversational QA dataset QuAC and the sequential instruction understanding dataset SCONE, which shows the effectiveness of the proposed mechanism and demonstrates its capability to generalize to different QA models and tasks.
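As a rough illustration of the idea, the sketch below models the information gain as the difference between the flow representations of consecutive reasoning layers and feeds that delta into the next integration layer, in the spirit of FlowQA-style architectures. The module name `FlowDeltaLayer`, the choice of GRUs, and the tensor shapes are illustrative assumptions, not the authors' implementation (see MiuLab/FlowDelta for the actual code).

```python
# Hypothetical sketch of a FlowDelta-style information-gain signal (PyTorch).
# Names, shapes, and module choices are assumptions, not the paper's code.
import torch
import torch.nn as nn


class FlowDeltaLayer(nn.Module):
    """One reasoning layer whose integration input includes the information
    gain (delta) between the current and previous flow representations."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Flow: an RNN over the dialogue-turn axis (as in FlowQA-style models).
        self.flow_rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        # Integration: an RNN over the context-word axis; its input is the
        # word representation concatenated with the flow delta.
        self.integrate_rnn = nn.GRU(2 * hidden_size, hidden_size, batch_first=True)

    def forward(self, ctx, prev_flow):
        # ctx:       (turns, ctx_len, hidden)  context representation per turn
        # prev_flow: (turns, ctx_len, hidden)  flow output of the previous layer
        # Run the flow RNN along the dialogue-turn dimension for each word.
        flow_in = ctx.transpose(0, 1).contiguous()    # (ctx_len, turns, hidden)
        flow_out, _ = self.flow_rnn(flow_in)
        flow = flow_out.transpose(0, 1)               # (turns, ctx_len, hidden)

        # Information gain: difference between consecutive flow representations.
        delta = flow - prev_flow

        # Feed the delta alongside the context into the integration RNN.
        out, _ = self.integrate_rnn(torch.cat([ctx, delta], dim=-1))
        return out, flow


# Illustrative usage: stack two layers, passing each layer's flow as the
# previous flow of the next, so the delta captures per-layer information gain.
layer1, layer2 = FlowDeltaLayer(64), FlowDeltaLayer(64)
ctx = torch.randn(3, 20, 64)                 # 3 dialogue turns, 20 context words
out1, flow1 = layer1(ctx, torch.zeros_like(ctx))
out2, flow2 = layer2(out1, flow1)
```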
- Anthology ID: D19-5812
- Volume: Proceedings of the 2nd Workshop on Machine Reading for Question Answering
- Month: November
- Year: 2019
- Address: Hong Kong, China
- Editors: Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, Danqi Chen
- Venue: WS
- Publisher: Association for Computational Linguistics
- Pages: 86–90
- URL: https://aclanthology.org/D19-5812
- DOI: 10.18653/v1/D19-5812
- Cite (ACL): Yi-Ting Yeh and Yun-Nung Chen. 2019. FlowDelta: Modeling Flow Information Gain in Reasoning for Conversational Machine Comprehension. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 86–90, Hong Kong, China. Association for Computational Linguistics.
- Cite (Informal): FlowDelta: Modeling Flow Information Gain in Reasoning for Conversational Machine Comprehension (Yeh & Chen, 2019)
- PDF: https://preview.aclanthology.org/nschneid-patch-1/D19-5812.pdf
- Code: MiuLab/FlowDelta
- Data: CoQA, QuAC, SQuAD