Leveraging Low-resource Parallel Data for Text Style Transfer

Sourabrata Mukherjee, Ondrej Dusek


Abstract
Text style transfer (TST) involves transforming a text into a desired style while approximately preserving its content. The biggest challenge in TST is the general lack of parallel data. Many existing approaches rely on complex models using substantial non-parallel data, with mixed results. In this paper, we leverage a pretrained BART language model with minimal parallel data and incorporate low-resource methods such as hyperparameter tuning, data augmentation, and self-training, which have not previously been explored in TST. We further include novel style-based rewards in the training loss. Through extensive experiments in sentiment transfer, a sub-task of TST, we demonstrate that our simple yet effective approaches achieve well-balanced results, surpassing non-parallel approaches and highlighting the usefulness of parallel data even in small amounts.
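The core recipe the abstract describes, fine-tuning a pretrained BART model on a small parallel corpus of style-transfer pairs, can be sketched as below. This is a minimal illustration rather than the authors' released code: the toy sentence pairs, checkpoint choice, and hyperparameters are assumptions, and the paper's style-based reward term is omitted (it would replace the plain cross-entropy loss, e.g. via a custom loss in the training loop).

# Minimal sketch (assumed setup, not the paper's code): fine-tune BART
# on a handful of parallel sentiment-transfer pairs with HuggingFace.
from torch.utils.data import Dataset
from transformers import (BartForConditionalGeneration, BartTokenizerFast,
                          Trainer, TrainingArguments)

# Hypothetical low-resource parallel data: negative -> positive sentiment.
PAIRS = [
    ("the service was slow and rude.", "the service was quick and friendly."),
    ("i hated the food here.", "i loved the food here."),
]

class ParallelStyleDataset(Dataset):
    """Wraps (source, target) sentence pairs for seq2seq fine-tuning."""
    def __init__(self, pairs, tokenizer, max_len=64):
        self.pairs, self.tok, self.max_len = pairs, tokenizer, max_len

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        src, tgt = self.pairs[idx]
        enc = self.tok(src, truncation=True, max_length=self.max_len,
                       padding="max_length", return_tensors="pt")
        lab = self.tok(tgt, truncation=True, max_length=self.max_len,
                       padding="max_length", return_tensors="pt").input_ids.squeeze(0)
        lab[lab == self.tok.pad_token_id] = -100  # ignore padding in the loss
        return {"input_ids": enc.input_ids.squeeze(0),
                "attention_mask": enc.attention_mask.squeeze(0),
                "labels": lab}

tok = BartTokenizerFast.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bart-tst-sketch", num_train_epochs=3,
                           per_device_train_batch_size=2, learning_rate=3e-5),
    train_dataset=ParallelStyleDataset(PAIRS, tok),
)
trainer.train()

# Usage: transfer a new negative sentence to positive style.
inputs = tok("the staff was unhelpful.", return_tensors="pt")
out = model.generate(**inputs, max_length=64, num_beams=4)
print(tok.decode(out[0], skip_special_tokens=True))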
Anthology ID:
2023.inlg-main.27
Volume:
Proceedings of the 16th International Natural Language Generation Conference
Month:
September
Year:
2023
Address:
Prague, Czechia
Editors:
C. Maria Keet, Hung-Yi Lee, Sina Zarrieß
Venues:
INLG | SIGDIAL
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
388–395
URL:
https://aclanthology.org/2023.inlg-main.27
DOI:
10.18653/v1/2023.inlg-main.27
Cite (ACL):
Sourabrata Mukherjee and Ondrej Dusek. 2023. Leveraging Low-resource Parallel Data for Text Style Transfer. In Proceedings of the 16th International Natural Language Generation Conference, pages 388–395, Prague, Czechia. Association for Computational Linguistics.
Cite (Informal):
Leveraging Low-resource Parallel Data for Text Style Transfer (Mukherjee & Dusek, INLG-SIGDIAL 2023)
PDF:
https://aclanthology.org/2023.inlg-main.27.pdf