Abstract
How can we generate summaries in different styles without requiring corpora in the target styles or training separate models? We present two novel methods that can be deployed during summary decoding on any pre-trained Transformer-based summarization model. (1) Decoder state adjustment instantly modifies the decoder's final states with externally trained style scorers to iteratively refine the output toward a target style. (2) Word unit prediction constrains word usage to impose strong lexical control during generation. In experiments on summarization with simplicity control, both automatic evaluation and human judges find that our models produce outputs in simpler language while remaining informative. We also generate news headlines with various ideological leanings, which humans can distinguish with reasonable accuracy.
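To make the two decoding-time controls concrete, here is a minimal PyTorch sketch of both ideas. It is a sketch under assumptions, not the paper's implementation: the style scorer interface, function names, step size, and iteration count are all illustrative.

```python
import torch
import torch.nn.functional as F

def adjust_decoder_state(hidden, style_scorer, target_style,
                         step_size=0.05, n_iters=3):
    """Decoder state adjustment (sketch): iteratively nudge the decoder's
    final hidden state so that an externally trained style scorer assigns
    higher probability to the target style.

    hidden:       (batch, hidden_dim) decoder final state at one step
    style_scorer: module mapping hidden states to style logits
    (step_size and n_iters are illustrative, not the paper's settings)
    """
    h = hidden.detach().clone().requires_grad_(True)
    for _ in range(n_iters):
        loss = F.cross_entropy(
            style_scorer(h),
            torch.full((h.size(0),), target_style,
                       dtype=torch.long, device=h.device),
        )
        loss.backward()
        with torch.no_grad():
            h -= step_size * h.grad  # gradient step toward the target style
        h.grad = None
    return h.detach()  # used in place of the original state before the LM head

def constrain_word_units(lm_logits, allowed_token_ids):
    """Word unit prediction (sketch): impose lexical control by masking
    out every vocabulary item not predicted as usable for the style."""
    mask = torch.full_like(lm_logits, float("-inf"))
    mask[..., allowed_token_ids] = 0.0
    return lm_logits + mask
```

In this reading, each decoding step would swap the adjusted state in before the vocabulary projection, and the masked logits would then feed into the usual softmax or beam search.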
- Anthology ID: 2021.naacl-main.476
- Volume: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
- Month: June
- Year: 2021
- Address: Online
- Editors: Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
- Venue: NAACL
- Publisher: Association for Computational Linguistics
- Pages: 5942–5953
- URL: https://aclanthology.org/2021.naacl-main.476
- DOI: 10.18653/v1/2021.naacl-main.476
- Cite (ACL): Shuyang Cao and Lu Wang. 2021. Inference Time Style Control for Summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5942–5953, Online. Association for Computational Linguistics.
- Cite (Informal): Inference Time Style Control for Summarization (Cao & Wang, NAACL 2021)
- PDF: https://aclanthology.org/2021.naacl-main.476.pdf