Xiya Cheng
2021
Simple or Complex? Complexity-controllable Question Generation with Soft Templates and Deep Mixture of Experts Model
Sheng Bi | Xiya Cheng | Yuan-Fang Li | Lizhen Qu | Shirong Shen | Guilin Qi | Lu Pan | Yinlin Jiang
Findings of the Association for Computational Linguistics: EMNLP 2021
The ability to generate natural-language questions with controlled complexity levels is highly desirable as it further expands the applicability of question generation. In this paper, we propose an end-to-end neural complexity-controllable question generation model, which incorporates a mixture of experts (MoE) as the selector of soft templates to improve the accuracy of complexity control and the quality of generated questions. The soft templates capture question similarity while avoiding the expensive construction of actual templates. Our method introduces a novel, cross-domain complexity estimator to assess the complexity of a question, taking into account the passage, the question, the answer, and their interactions. The experimental results on two benchmark QA datasets demonstrate that our QG model is superior to state-of-the-art methods in both automatic and manual evaluation. Moreover, our complexity estimator is significantly more accurate than the baselines in both in-domain and out-of-domain settings.
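To make the template-selection idea concrete, below is a minimal PyTorch sketch of a mixture-of-experts gate over learned soft-template vectors, conditioned on a target complexity label. Everything here (the class name `SoftTemplateMoE`, the gating inputs, all dimensions) is a hypothetical illustration of the general technique, not the paper's implementation.

```python
# Illustrative sketch only: an MoE gate selects among K learned "soft
# template" embeddings; no hand-written templates are ever materialized.
import torch
import torch.nn as nn


class SoftTemplateMoE(nn.Module):
    def __init__(self, hidden_dim: int, num_experts: int):
        super().__init__()
        # Each expert owns a learnable soft-template vector.
        self.templates = nn.Parameter(torch.randn(num_experts, hidden_dim))
        # The gate scores experts from the encoder state plus a scalar
        # complexity label (here assumed to be 0 = simple, 1 = complex).
        self.gate = nn.Linear(hidden_dim + 1, num_experts)

    def forward(self, enc_state: torch.Tensor, complexity: torch.Tensor):
        # enc_state: (batch, hidden_dim); complexity: (batch, 1)
        logits = self.gate(torch.cat([enc_state, complexity], dim=-1))
        weights = torch.softmax(logits, dim=-1)  # (batch, num_experts)
        # Mixture of soft templates, to be fed to a decoder as extra context.
        return weights @ self.templates          # (batch, hidden_dim)


moe = SoftTemplateMoE(hidden_dim=256, num_experts=4)
ctx = moe(torch.randn(2, 256), torch.tensor([[0.0], [1.0]]))
print(ctx.shape)  # torch.Size([2, 256])
```

In a full model, the mixture vector would condition the decoder at each step, so the chosen complexity label steers which template-like patterns the generated question follows.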
2020
Knowledge-enriched, Type-constrained and Grammar-guided Question Generation over Knowledge Bases
Sheng Bi | Xiya Cheng | Yuan-Fang Li | Yongzhen Wang | Guilin Qi
Proceedings of the 28th International Conference on Computational Linguistics
Question generation over knowledge bases (KBQG) aims at generating natural-language questions about a subgraph, i.e. a set of triples. Two main challenges still face the current crop of encoder-decoder-based methods, especially on small subgraphs: (1) low diversity and poor fluency due to the limited information contained in the subgraphs, and (2) semantic drift due to the decoder’s oblivion of the semantics of the answer entity. We propose an innovative knowledge-enriched, type-constrained and grammar-guided KBQG model, named KTG, to address the above challenges. In our model, the encoder is equipped with auxiliary information from the KB, and the decoder is constrained with word types during QG. Specifically, entity domain and description, as well as relation hierarchy information, are considered to construct question contexts, while a conditional copy mechanism is incorporated to modulate question semantics according to current word types. In addition, a novel reward function featuring grammatical similarity is designed to improve both generative richness and syntactic correctness via reinforcement learning. Extensive experiments show that our proposed model outperforms existing methods by a significant margin on two widely-used benchmark datasets, SimpleQuestion and PathQuestion.
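As a rough illustration of the conditional copy idea (not the authors' code), the sketch below gates the mix between generating a vocabulary word and copying a subgraph token on a soft word-type prediction. The class name `ConditionalCopy`, the three-way type set, and all shapes are assumptions made for the example.

```python
# Illustrative sketch only: a copy gate modulated by a predicted word type,
# mixing a vocabulary distribution with a copy distribution over KB tokens.
import torch
import torch.nn as nn


class ConditionalCopy(nn.Module):
    def __init__(self, hidden_dim: int, vocab_size: int, num_types: int = 3):
        super().__init__()
        # Soft word-type prediction, e.g. entity / relation / ordinary word.
        self.type_clf = nn.Linear(hidden_dim, num_types)
        self.vocab_proj = nn.Linear(hidden_dim, vocab_size)
        # The copy gate sees the decoder state plus the type prediction.
        self.copy_gate = nn.Linear(hidden_dim + num_types, 1)

    def forward(self, dec_state, copy_attn, src_to_vocab):
        # dec_state: (batch, hidden); copy_attn: (batch, src_len) attention
        # over subgraph tokens; src_to_vocab: (batch, src_len, vocab) one-hot.
        type_probs = torch.softmax(self.type_clf(dec_state), dim=-1)
        p_gen = torch.sigmoid(
            self.copy_gate(torch.cat([dec_state, type_probs], dim=-1)))
        vocab_dist = torch.softmax(self.vocab_proj(dec_state), dim=-1)
        copy_dist = torch.bmm(copy_attn.unsqueeze(1), src_to_vocab).squeeze(1)
        # Final next-word distribution: generate vs. copy, per the gate.
        return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist


model = ConditionalCopy(hidden_dim=128, vocab_size=5000)
src_ids = torch.randint(0, 5000, (2, 7))
src_to_vocab = torch.zeros(2, 7, 5000).scatter_(2, src_ids.unsqueeze(-1), 1.0)
attn = torch.softmax(torch.randn(2, 7), dim=-1)
dist = model(torch.randn(2, 128), attn, src_to_vocab)
print(dist.shape)  # torch.Size([2, 5000])
```

Conditioning the gate on the type prediction is what lets the decoder, for instance, prefer copying when the next word should be an answer entity, which is one plausible way to counter the semantic drift the abstract describes.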
Co-authors
- Sheng Bi 2
- Yuan-Fang Li 2
- Guilin Qi 2
- Lizhen Qu 1
- Shirong Shen 1