On Learning Text Style Transfer with Direct Rewards

Yixin Liu, Graham Neubig, John Wieting


Abstract
In most cases, the lack of parallel corpora makes it impossible to directly train supervised models for the text style transfer task. In this paper, we explore training algorithms that instead optimize reward functions that explicitly consider different aspects of the style-transferred outputs. In particular, we leverage semantic similarity metrics originally used for fine-tuning neural machine translation models to explicitly assess the preservation of content between system outputs and input texts. We also investigate the potential weaknesses of the existing automatic metrics and propose efficient strategies for using these metrics during training. The experimental results show that our model provides significant gains in both automatic and human evaluation over strong baselines, indicating the effectiveness of our proposed methods and training strategies.
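The core idea in the abstract (combining a content-preservation signal with a style signal into a single training reward) can be sketched as follows. This is a minimal illustrative toy, not the paper's actual metrics: the function names are hypothetical, Jaccard overlap stands in for a learned semantic-similarity metric, and vocabulary overlap stands in for a style classifier.

```python
# Illustrative sketch of a multi-aspect reward for style transfer.
# All names and metrics here are toy stand-ins, not the paper's method.

def content_similarity(src_tokens, out_tokens):
    """Toy stand-in for a learned semantic-similarity metric:
    token-set Jaccard overlap between input and output."""
    a, b = set(src_tokens), set(out_tokens)
    return len(a & b) / len(a | b) if a | b else 0.0

def style_score(out_tokens, target_style_words):
    """Toy stand-in for a style classifier: fraction of output
    tokens drawn from a target-style vocabulary."""
    if not out_tokens:
        return 0.0
    return sum(t in target_style_words for t in out_tokens) / len(out_tokens)

def reward(src_tokens, out_tokens, target_style_words, alpha=0.5):
    """Combine content preservation and style strength into one scalar
    reward that a policy-gradient-style trainer could optimize."""
    return (alpha * content_similarity(src_tokens, out_tokens)
            + (1 - alpha) * style_score(out_tokens, target_style_words))

r = reward(["this", "movie", "is", "bad"],
           ["this", "movie", "is", "great"],
           {"great", "wonderful"})
# Jaccard = 3/5 = 0.6, style = 1/4 = 0.25, so reward = 0.425
print(r)
```

In the paper's actual setup, the similarity term is a semantic similarity model borrowed from NMT fine-tuning rather than token overlap, but the structure of the objective is the same: reward outputs that both preserve content and match the target style.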
Anthology ID:
2021.naacl-main.337
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
4262–4273
URL:
https://aclanthology.org/2021.naacl-main.337
DOI:
10.18653/v1/2021.naacl-main.337
Cite (ACL):
Yixin Liu, Graham Neubig, and John Wieting. 2021. On Learning Text Style Transfer with Direct Rewards. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4262–4273, Online. Association for Computational Linguistics.
Cite (Informal):
On Learning Text Style Transfer with Direct Rewards (Liu et al., NAACL 2021)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2021.naacl-main.337.pdf
Video:
https://preview.aclanthology.org/auto-file-uploads/2021.naacl-main.337.mp4
Code
yixinL7/Direct-Style-Transfer
Data
GYAFC