Abstract
Abstractive text summarization aims at generating human-like summaries by understanding and paraphrasing the given input content. Recent efforts based on sequence-to-sequence networks only allow the generation of a single summary. However, it is often desirable to accommodate the psycho-linguistic preferences of the intended audience while generating the summaries. In this work, we present a reinforcement learning based approach to generate formality-tailored summaries for an input article. Our novel input-dependent reward function aids in training the model with stylistic feedback on sampled and ground-truth summaries together. Once trained, the same model can generate formal and informal summary variants. Our automated and qualitative evaluations show the viability of the proposed framework.
- Anthology ID: K19-1078
- Volume: Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
- Month: November
- Year: 2019
- Address: Hong Kong, China
- Editors: Mohit Bansal, Aline Villavicencio
- Venue: CoNLL
- SIG: SIGNLL
- Publisher: Association for Computational Linguistics
- Pages: 833–842
- URL: https://preview.aclanthology.org/add_missing_videos/K19-1078/
- DOI: 10.18653/v1/K19-1078
- Cite (ACL): Kushal Chawla, Balaji Vasan Srinivasan, and Niyati Chhaya. 2019. Generating Formality-Tuned Summaries Using Input-Dependent Rewards. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 833–842, Hong Kong, China. Association for Computational Linguistics.
- Cite (Informal): Generating Formality-Tuned Summaries Using Input-Dependent Rewards (Chawla et al., CoNLL 2019)
- PDF: https://preview.aclanthology.org/add_missing_videos/K19-1078.pdf
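The abstract describes reinforcement-learning training driven by an input-dependent reward that provides stylistic feedback on both sampled and ground-truth summaries. The paper's exact formulation is not reproduced here; the following is a minimal illustrative sketch of one way such a setup can look, assuming a self-critical (greedy-baseline) policy gradient and a reward that mixes a content-overlap term with a formality term whose target varies per input request. All function names, the `lambda_style` weight, and the crude scoring heuristics are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' released code): self-critical policy
# gradient with a reward that mixes content fidelity and formality closeness.
import torch

def content_overlap(candidate: str, reference: str) -> float:
    """Crude unigram-overlap proxy standing in for a ROUGE-style content reward."""
    cand, ref = set(candidate.lower().split()), set(reference.lower().split())
    return len(cand & ref) / max(len(ref), 1)

def formality_score(text: str) -> float:
    """Placeholder for a learned formality scorer returning a value in [0, 1]."""
    informal_markers = {"gonna", "wanna", "lol", "kinda"}
    tokens = text.lower().split()
    informal = sum(t in informal_markers for t in tokens)
    return 1.0 - informal / max(len(tokens), 1)

def input_dependent_reward(sample: str, reference: str,
                           target_formality: float,
                           lambda_style: float = 0.5) -> float:
    """Mix content fidelity with closeness to the requested formality level.

    The style term is "input-dependent" in the sense that the formality target
    can differ for every input article / user request.
    """
    content = content_overlap(sample, reference)
    style = 1.0 - abs(formality_score(sample) - target_formality)
    return (1.0 - lambda_style) * content + lambda_style * style

def self_critical_loss(sample_log_prob: torch.Tensor,
                       sampled_summary: str, greedy_summary: str,
                       reference: str, target_formality: float) -> torch.Tensor:
    """REINFORCE with a greedy-decoding baseline (self-critical training)."""
    r_sample = input_dependent_reward(sampled_summary, reference, target_formality)
    r_greedy = input_dependent_reward(greedy_summary, reference, target_formality)
    advantage = r_sample - r_greedy
    return -advantage * sample_log_prob  # minimized by the optimizer
```

The greedy-decode baseline shown here is a common variance-reduction choice in RL-based summarization; whether this paper uses that particular baseline, or this particular mixing scheme, is not stated in the abstract and should be checked against the full text.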