Exploring Question-Specific Rewards for Generating Deep Questions

Yuxi Xie, Liangming Pan, Dongzhe Wang, Min-Yen Kan, Yansong Feng

Abstract
Recent question generation (QG) approaches often utilize the sequence-to-sequence framework (Seq2Seq) to optimize the log-likelihood of ground-truth questions using teacher forcing. However, this training objective is inconsistent with actual question quality, which is often reflected by certain global properties such as whether the question can be answered by the document. As such, we directly optimize for QG-specific objectives via reinforcement learning to improve question quality. We design three different rewards that respectively target the fluency, relevance, and answerability of generated questions. We conduct both automatic and human evaluations, in addition to a thorough analysis, to explore the effect of each QG-specific reward. We find that optimizing for question-specific rewards generally leads to better performance on automatic evaluation metrics. However, only the rewards that correlate well with human judgement (e.g., relevance) lead to real improvements in question quality. Optimizing for the others, especially answerability, introduces incorrect bias to the model, resulting in poorer question quality. The code is publicly available at https://github.com/YuxiXie/RL-for-Question-Generation.
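To make the training objective described above concrete: the approach replaces (or augments) the teacher-forcing log-likelihood with a policy-gradient objective, in which a question is sampled from the decoder and the log-probability of that sample is scaled by a scalar reward. Below is a minimal PyTorch sketch of such a REINFORCE-style loss, assuming the reward is a weighted combination of fluency, relevance, and answerability scores. The function name, tensor shapes, and mean-reward baseline are illustrative assumptions, not the paper's exact implementation.

import torch
import torch.nn.functional as F

def reinforce_loss(logits, sampled_ids, rewards, pad_id=0):
    # logits:      (batch, seq_len, vocab) decoder scores at each sampled step
    # sampled_ids: (batch, seq_len) question tokens sampled from the model
    # rewards:     (batch,) scalar reward per question, e.g. a weighted sum
    #              of fluency, relevance, and answerability scores
    log_probs = F.log_softmax(logits, dim=-1)
    # log-probability assigned to each sampled token
    token_logp = log_probs.gather(-1, sampled_ids.unsqueeze(-1)).squeeze(-1)
    mask = (sampled_ids != pad_id).float()      # ignore padding positions
    seq_logp = (token_logp * mask).sum(dim=1)   # sequence log-likelihood
    # simple mean baseline to reduce variance; a self-critical
    # (greedy-decoding) baseline is a common alternative, and the paper's
    # exact choice may differ (assumption)
    advantage = (rewards - rewards.mean()).detach()
    return -(advantage * seq_logp).mean()

if __name__ == "__main__":
    # illustrative call with random tensors: batch=4, seq_len=12, vocab=100
    logits = torch.randn(4, 12, 100)
    ids = torch.randint(1, 100, (4, 12))
    print(reinforce_loss(logits, ids, torch.rand(4)))

In practice a policy-gradient term of this kind is typically interpolated with the maximum-likelihood (teacher-forcing) loss to stabilize training; the abstract's finding suggests the weights on rewards that correlate poorly with human judgement, such as answerability, should be chosen with care.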
Anthology ID:
2020.coling-main.228
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
2534–2546
URL:
https://aclanthology.org/2020.coling-main.228
DOI:
10.18653/v1/2020.coling-main.228
Cite (ACL):
Yuxi Xie, Liangming Pan, Dongzhe Wang, Min-Yen Kan, and Yansong Feng. 2020. Exploring Question-Specific Rewards for Generating Deep Questions. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2534–2546, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
Exploring Question-Specific Rewards for Generating Deep Questions (Xie et al., COLING 2020)
PDF:
https://preview.aclanthology.org/ingest-acl-2023-videos/2020.coling-main.228.pdf
Code
YuxiXie/RL-for-Question-Generation
Data
HotpotQA