Abstract
Sentiment style transfer (SST), a variant of text style transfer (TST), has recently attracted extensive interest. Although disentangling-based approaches have improved performance, most still struggle to transfer the input properly because sentiment style is intertwined with the content of the text. To alleviate this issue, we propose a plug-and-play method that leverages an iterative self-refinement algorithm with a large language model (LLM). Our approach decomposes the straightforward Seq2Seq generation into two phases: (1) a Reduction phase, which generates a style-free sequence for a given text, and (2) a Synthesis phase, which generates the target text from the sequence output by the first phase. Experimental results on two datasets demonstrate that our transfer strategy is effective for challenging SST cases where the baseline methods perform poorly. Our code is available online.
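To make the two-phase pipeline concrete, here is a minimal Python sketch. It assumes only a generic text-completion callable `generate`; the prompt wording, the `reduction_phase`/`synthesis_phase` function names, and the refinement loop are illustrative assumptions, not the authors' actual prompts or implementation.

```python
from typing import Callable

# `generate` stands in for any LLM text-completion call.
# All prompts below are illustrative assumptions, not the paper's prompts.

def reduction_phase(generate: Callable[[str], str], text: str) -> str:
    """Reduction: produce a style-free (sentiment-neutral) version of the input."""
    prompt = (
        "Rewrite the following sentence so that it expresses no sentiment, "
        "while keeping its content intact:\n" + text
    )
    return generate(prompt)

def synthesis_phase(generate: Callable[[str], str], neutral: str, target: str) -> str:
    """Synthesis: regenerate the text in the target sentiment from the neutral sequence."""
    prompt = (
        f"Rewrite the following sentence so that it expresses {target} "
        f"sentiment, preserving its content:\n{neutral}"
    )
    return generate(prompt)

def transfer(generate: Callable[[str], str], text: str,
             target: str = "positive", rounds: int = 2) -> str:
    """Two-phase transfer with a simple iterative self-refinement loop:
    each extra round feeds the previous draft back to the LLM for revision."""
    neutral = reduction_phase(generate, text)
    draft = synthesis_phase(generate, neutral, target)
    for _ in range(rounds - 1):
        refine_prompt = (
            f'Improve the following {target} rewrite of "{neutral}", '
            f"keeping its content unchanged:\n{draft}"
        )
        draft = generate(refine_prompt)
    return draft
```

In practice, `generate` would wrap a call to whichever LLM API is available, and the number of refinement rounds is a tunable hyperparameter.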
- Anthology ID: 2024.inlg-main.28
- Volume: Proceedings of the 17th International Natural Language Generation Conference
- Month: September
- Year: 2024
- Address: Tokyo, Japan
- Editors: Saad Mahamood, Nguyen Le Minh, Daphne Ippolito
- Venue: INLG
- SIG: SIGGEN
- Publisher: Association for Computational Linguistics
- Pages: 330–343
- URL: https://aclanthology.org/2024.inlg-main.28
- Cite (ACL): Sheng Xu, Fumiyo Fukumoto, and Yoshimi Suzuki. 2024. Reduction-Synthesis: Plug-and-Play for Sentiment Style Transfer. In Proceedings of the 17th International Natural Language Generation Conference, pages 330–343, Tokyo, Japan. Association for Computational Linguistics.
- Cite (Informal): Reduction-Synthesis: Plug-and-Play for Sentiment Style Transfer (Xu et al., INLG 2024)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2024.inlg-main.28.pdf