@inproceedings{li-etal-2019-deep,
    title = "Deep Reinforcement Learning with Distributional Semantic Rewards for Abstractive Summarization",
    author = "Li, Siyao  and
      Lei, Deren  and
      Qin, Pengda  and
      Wang, William Yang",
    editor = "Inui, Kentaro  and
      Jiang, Jing  and
      Ng, Vincent  and
      Wan, Xiaojun",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/D19-1623/",
    doi = "10.18653/v1/D19-1623",
    pages = "6038--6044",
    abstract = "Deep reinforcement learning (RL) has been a commonly-used strategy for the abstractive summarization task to address both the exposure bias and non-differentiable task issues. However, the conventional reward Rouge-L simply looks for exact n-gram matches between candidates and annotated references, which inevitably makes the generated sentences repetitive and incoherent. In this paper, instead of Rouge-L, we explore the practicability of utilizing distributional semantics to measure the matching degrees. With distributional semantics, sentence-level evaluation can be obtained, and semantically-correct phrases can also be generated without being limited to the surface form of the reference sentences. Human judgments on Gigaword and CNN/Daily Mail datasets show that our proposed distributional semantics reward (DSR) has distinct superiority in capturing the lexical and compositional diversity of natural language."
}
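For context, a minimal sketch of the idea the abstract describes: replacing a surface-overlap reward (Rouge-L) with a reward derived from distributional semantics, approximated here as cosine similarity between candidate and reference sentence embeddings. The encoder choice, function names, and the policy-gradient usage note are illustrative assumptions, not the paper's actual DSR implementation.

```python
# Illustrative sketch only: a distributional-semantics reward (DSR) for
# RL-based abstractive summarization, approximated as the cosine similarity
# between candidate and reference sentence embeddings. The encoder
# ("all-MiniLM-L6-v2") and function names are assumptions, not the paper's
# actual formulation.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical encoder choice

def dsr_reward(candidate: str, reference: str) -> float:
    """Sentence-level semantic reward in [-1, 1]; higher means closer meaning."""
    cand_vec, ref_vec = encoder.encode([candidate, reference])
    return float(np.dot(cand_vec, ref_vec)
                 / (np.linalg.norm(cand_vec) * np.linalg.norm(ref_vec)))

# In a REINFORCE-style training loop, the reward would scale the
# log-likelihood of a sampled summary, e.g.:
#   loss = -(dsr_reward(sampled, reference) - baseline) * log_prob(sampled)
```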