@inproceedings{cao-etal-2019-controlling,
    title = "Controlling the Specificity of Clarification Question Generation",
    author = "Cao, Yang Trista  and
      Rao, Sudha  and
      Daum{\'e} III, Hal",
    editor = "Axelrod, Amittai  and
      Yang, Diyi  and
      Cunha, Rossana  and
      Shaikh, Samira  and
      Waseem, Zeerak",
    booktitle = "Proceedings of the 2019 Workshop on Widening NLP",
    month = aug,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/iwcs-25-ingestion/W19-3619/",
    pages = "53--56",
    abstract = "Unlike comprehension-style questions, clarification questions look for some missing information in a given context. However, without guidance, neural models for question generation, similar to dialog generation models, lead to generic and bland questions that cannot elicit useful information. We argue that controlling the level of specificity of the generated questions can have useful applications and propose a neural clarification question generation model for the same. We first train a classifier that annotates a clarification question with its level of specificity (generic or specific) to the given context. Our results on the Amazon questions dataset demonstrate that training a clarification question generation model on specificity annotated data can generate questions with varied levels of specificity to the given context."
}