Abstract
The finetuning of pretrained transformer-based language generation models is typically conducted in an end-to-end manner, where the model learns to attend to relevant parts of the input by itself. However, there is no mechanism to directly control the model’s focus. This work aims to develop a control mechanism by which a user can select spans of context as “highlights” for the model to focus on, and generate relevant output. To achieve this goal, we augment a pretrained model with trainable “focus vectors” that are directly applied to the model’s embeddings, while the model itself is kept fixed. These vectors, trained on automatic annotations derived from attribution methods, act as indicators for context importance. We test our approach on two core generation tasks: dialogue response generation and abstractive summarization. We also collect evaluation data where the highlight-generation pairs are annotated by humans. Our experiments show that the trained focus vectors are effective in steering the model to generate outputs that are relevant to user-selected highlights.
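To make the mechanism concrete, here is a minimal PyTorch sketch of the idea described in the abstract. It is not the authors’ implementation (see the linked code repository for that): it assumes a single pair of trainable vectors, one for highlighted tokens and one for the rest, added to the input embeddings of a frozen GPT-2; the wrapper name, the two-vector parameterization, and the input-layer placement are illustrative assumptions.

```python
# Illustrative sketch only -- not the paper's exact method. Two trainable
# "focus vectors" are added to the input embeddings of a frozen GPT-2,
# depending on whether each token lies inside a user-selected highlight.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

class FocusVectorWrapper(nn.Module):  # hypothetical name
    def __init__(self, model_name="gpt2"):
        super().__init__()
        self.lm = GPT2LMHeadModel.from_pretrained(model_name)
        for p in self.lm.parameters():
            p.requires_grad = False  # the pretrained model is kept fixed
        d = self.lm.config.n_embd
        # Only these two vectors are trained.
        self.focus = nn.Parameter(torch.zeros(d))
        self.non_focus = nn.Parameter(torch.zeros(d))

    def forward(self, input_ids, highlight_mask, labels=None):
        # highlight_mask: (batch, seq_len), 1 on user-selected highlight spans.
        embeds = self.lm.transformer.wte(input_ids)
        shift = torch.where(highlight_mask.unsqueeze(-1).bool(),
                            self.focus, self.non_focus)
        return self.lm(inputs_embeds=embeds + shift, labels=labels)

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = FocusVectorWrapper()
ids = tok("The cat sat on the mat.", return_tensors="pt").input_ids
mask = torch.zeros_like(ids)
mask[0, 1] = 1  # highlight the token " cat"
loss = model(ids, mask, labels=ids).loss
loss.backward()  # gradients reach only the two focus vectors
```

Because the base model is frozen, training such vectors (in the paper, on automatic annotations derived from attribution methods) only has to fit a handful of parameters per highlight state.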
- Anthology ID: 2022.findings-acl.260
- Volume: Findings of the Association for Computational Linguistics: ACL 2022
- Month: May
- Year: 2022
- Address: Dublin, Ireland
- Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 3291–3306
- URL: https://aclanthology.org/2022.findings-acl.260
- DOI: 10.18653/v1/2022.findings-acl.260
- Cite (ACL): Jiabao Ji, Yoon Kim, James Glass, and Tianxing He. 2022. Controlling the Focus of Pretrained Language Generation Models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3291–3306, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal): Controlling the Focus of Pretrained Language Generation Models (Ji et al., Findings 2022)
- PDF: https://aclanthology.org/2022.findings-acl.260.pdf
- Code: question406/learningtofocus
- Data: CNN/Daily Mail