Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering

Wang Zhu, Jesse Thomason, Robin Jia


Abstract
We propose Chain-of-Questions, a framework that trains a model to robustly answer multistep questions by generating and answering sub-questions. We obtain supervision for sub-questions from human-annotated question decomposition meaning representation (QDMR), but QDMR does not include annotated answers to sub-questions. To overcome this technical challenge, we treat sub-answers as latent variables and infer them with a novel dynamic mixture of Hard-EM and MAPO. Chain-of-Questions is effective and robust, greatly outperforming strong neuro-symbolic methods by 9.0 F1 on a DROP contrast set and GPT-3.5 by 24.3 F1 on a HotpotQA adversarial set.
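To make the latent-variable objective concrete, below is a minimal, hypothetical PyTorch sketch of how a Hard-EM loss and a simplified MAPO-style loss over candidate latent sub-answer chains might be mixed. The function names, the fixed mixing weight lam, and the toy inputs are illustrative assumptions, not the paper's implementation; in particular, the paper mixes the two objectives dynamically during training, which a single scalar lam only approximates.

```python
import torch

def hard_em_loss(chain_log_probs):
    # Hard-EM: treat the single most likely latent sub-answer chain
    # as the "correct" one and maximize only its likelihood.
    return -chain_log_probs.max()

def mapo_style_loss(chain_log_probs, rewards):
    # Simplified MAPO-style policy gradient over a buffer of sampled
    # chains: each chain's log-likelihood is weighted by its reward
    # times its probability under the current policy. (The real MAPO
    # adds buffer probability clipping and systematic exploration.)
    weights = (chain_log_probs.exp() * rewards).detach()
    return -(weights * chain_log_probs).sum()

def mixed_loss(chain_log_probs, rewards, lam):
    # Interpolate between the two objectives; `lam` is a stand-in for
    # the paper's dynamic mixing schedule.
    return lam * hard_em_loss(chain_log_probs) + \
        (1.0 - lam) * mapo_style_loss(chain_log_probs, rewards)

# Toy usage: log-probabilities of four candidate sub-answer chains and
# binary rewards (1 if executing the chain reaches the gold answer).
chain_log_probs = torch.log(torch.tensor([0.5, 0.2, 0.2, 0.1]))
rewards = torch.tensor([1.0, 0.0, 1.0, 0.0])
print(mixed_loss(chain_log_probs, rewards, lam=0.7))
```

Intuitively, the Hard-EM term exploits the current best chain, while the MAPO-style term spreads credit across all rewarded chains, which guards against committing early to a spurious sub-answer sequence.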
Anthology ID: 2023.emnlp-main.547
Volume: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 8845–8860
URL: https://aclanthology.org/2023.emnlp-main.547
DOI: 10.18653/v1/2023.emnlp-main.547
Cite (ACL): Wang Zhu, Jesse Thomason, and Robin Jia. 2023. Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 8845–8860, Singapore. Association for Computational Linguistics.
Cite (Informal): Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering (Zhu et al., EMNLP 2023)
PDF: https://preview.aclanthology.org/dois-2013-emnlp/2023.emnlp-main.547.pdf