@inproceedings{xue-etal-2022-building,
    title = "Building a Knowledge-Based Dialogue System with Text Infilling",
    author = "Xue, Qiang  and
      Takiguchi, Tetsuya  and
      Ariki, Yasuo",
    editor = "Lemon, Oliver  and
      Hakkani-Tur, Dilek  and
      Li, Junyi Jessy  and
      Ashrafzadeh, Arash  and
      Garcia, Daniel Hern{\'a}ndez  and
      Alikhani, Malihe  and
      Vandyke, David  and
      Du{\v{s}}ek, Ond{\v{r}}ej",
    booktitle = "Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2022",
    address = "Edinburgh, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2022.sigdial-1.25/",
    doi = "10.18653/v1/2022.sigdial-1.25",
    pages = "237--243",
    abstract = "In recent years, generation-based dialogue systems using state-of-the-art (SoTA) transformer-based models have demonstrated impressive performance in simulating human-like conversations. To improve the coherence and knowledge utilization capabilities of dialogue systems, knowledge-based dialogue systems integrate retrieved graph knowledge into transformer-based models. However, knowledge-based dialogue systems sometimes generate responses without using the retrieved knowledge. In this work, we propose a method by which a knowledge-based dialogue system can consistently utilize the retrieved knowledge using text infilling. Text infilling is the task of predicting missing spans of a sentence or paragraph. We utilize text infilling to enable dialogue systems to fill incomplete responses with the retrieved knowledge. Our proposed dialogue system is shown to generate significantly more correct responses than baseline dialogue systems."
}