@inproceedings{cui-etal-2019-kb,
    title = "{KB}-{NLG}: From Knowledge Base to Natural Language Generation",
    author = "Cui, Wen  and
      Zhou, Minghui  and
      Zhao, Rongwen  and
      Norouzi, Narges",
    editor = "Axelrod, Amittai  and
      Yang, Diyi  and
      Cunha, Rossana  and
      Shaikh, Samira  and
      Waseem, Zeerak",
    booktitle = "Proceedings of the 2019 Workshop on Widening NLP",
    month = aug,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/iwcs-25-ingestion/W19-3626/",
    pages = "80--82",
    abstract = "We perform the natural language generation (NLG) task by mapping sets of Resource Description Framework (RDF) triples into text. First we investigate the impact of increasing the number of entity types in delexicalisation on the generation quality. Second we conduct different experiments to evaluate two widely applied language generation systems, encoder-decoder with attention and the Transformer model, on a large benchmark dataset. We evaluate different models on automatic metrics, as well as the training time. To our knowledge, we are the first to apply the Transformer model to this task."
}