Improving Controllable Text Generation with Position-Aware Weighted Decoding

Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Jiaming Wu, Heng Gong, Bing Qin


Abstract
Weighted decoding methods, composed of a pretrained language model (LM) and a controller, have achieved promising results for controllable text generation. However, these models often suffer from a control-strength/fluency trade-off: higher control strength is more likely to produce incoherent and repetitive text. In this paper, we show that this trade-off arises when the controller imposes the target attribute on the LM at improper positions, and we propose CAT-PAW, a novel framework built on existing weighted decoding methods that introduces a lightweight regulator to adjust bias signals from the controller at different decoding positions. Experiments on positive sentiment control, topic control, and language detoxification show the effectiveness of CAT-PAW on 4 state-of-the-art models.
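The mechanism the abstract describes can be illustrated with a short sketch. In weighted decoding, the next-token scores combine the frozen LM's logits with an attribute controller's bias; position-aware control scales that bias by a position-dependent strength lambda(t). Note the hedge: CAT-PAW learns its regulator, whereas the cosine schedule, function names, and tensor shapes below are hypothetical stand-ins for illustration only.

```python
import math

import torch


def control_strength(step: int, max_steps: int, base: float = 2.0) -> float:
    """Hypothetical position-dependent schedule lambda(t).

    CAT-PAW learns its regulator from data; this cosine decay (strong
    control early, weaker later) is only an illustrative stand-in.
    """
    return base * 0.5 * (1.0 + math.cos(math.pi * step / max_steps))


def weighted_decoding_step(lm_logits: torch.Tensor,
                           controller_bias: torch.Tensor,
                           step: int,
                           max_steps: int) -> torch.Tensor:
    """One step of position-aware weighted decoding:
    scores = LM logits + lambda(t) * controller bias.
    """
    return lm_logits + control_strength(step, max_steps) * controller_bias


# Toy usage: greedy selection over a vocabulary of size 8, with random
# stand-ins for the frozen LM's logits and the controller's attribute bias.
if __name__ == "__main__":
    torch.manual_seed(0)
    max_steps = 5
    for t in range(max_steps):
        lm_logits = torch.randn(8)
        controller_bias = torch.randn(8)
        scores = weighted_decoding_step(lm_logits, controller_bias, t, max_steps)
        print(f"step {t}: lambda={control_strength(t, max_steps):.2f}, "
              f"token={int(torch.argmax(scores))}")
```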
Anthology ID:
2022.findings-acl.272
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3449–3467
URL:
https://aclanthology.org/2022.findings-acl.272
DOI:
10.18653/v1/2022.findings-acl.272
Cite (ACL):
Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Jiaming Wu, Heng Gong, and Bing Qin. 2022. Improving Controllable Text Generation with Position-Aware Weighted Decoding. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3449–3467, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Improving Controllable Text Generation with Position-Aware Weighted Decoding (Gu et al., Findings 2022)
PDF:
https://preview.aclanthology.org/ingest-2024-clasp/2022.findings-acl.272.pdf
Data
IMDb Movie Reviews, SST, SST-5