Efficient (Soft) Q-Learning for Text Generation with Limited Good Data

Han Guo, Bowen Tan, Zhengzhong Liu, Eric Xing, Zhiting Hu

Abstract
Maximum likelihood estimation (MLE) is the predominant algorithm for training text generation models. This paradigm relies on direct supervision examples, which are not available in many emerging applications, such as generating adversarial attacks or generating prompts to control language models. Reinforcement learning (RL), on the other hand, offers a more flexible solution by allowing users to plug in arbitrary task metrics as rewards. Yet previous RL algorithms for text generation, such as policy gradient (on-policy RL) and Q-learning (off-policy RL), are notoriously inefficient or unstable to train due to the large sequence space and the sparse reward received only at the end of a sequence. In this paper, we introduce a new RL formulation for text generation from the soft Q-learning (SQL) perspective. It enables us to draw on the latest RL advances, such as path consistency learning, to combine the best of on-policy and off-policy updates and to learn effectively from sparse reward. We apply the approach to a wide range of novel text generation tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation. Experiments show that our approach consistently outperforms both task-specialized algorithms and previous RL methods.
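
The paper's full objectives, including the multi-step path-consistency variant used for sparse end-of-sequence rewards, are given in the PDF linked below. As a rough orientation only, here is a minimal PyTorch sketch of a single-step soft Q-learning update under the parameterization the abstract alludes to, where the language model's output logits are read as Q-values, so that V(s) = logsumexp_a Q(s, a). The function name, tensor layout, and the use of a target network here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def sql_single_step_loss(q_logits, q_logits_target, actions, reward):
    """Single-step soft Q-learning loss for one generated sequence.

    Illustrative sketch (not the authors' code). Assumes:
      q_logits        -- (T, V) online-model logits, read as Q(s_t, .),
                         where state s_t is the prefix before step t
      q_logits_target -- (T, V) logits from a slow-moving target copy
      actions         -- (T,) token ids actually generated
      reward          -- scalar task reward, received only at the end
    """
    # Q(s_t, a_t): the logit of the token actually taken at each step.
    q_taken = q_logits.gather(1, actions.unsqueeze(1)).squeeze(1)

    # Soft value V(s) = logsumexp_a Q(s, a), bootstrapped from the
    # target network; the value after the terminal step is zero.
    with torch.no_grad():
        v = torch.logsumexp(q_logits_target, dim=-1)   # (T,)
        v_next = torch.cat([v[1:], v.new_zeros(1)])    # V(s_{t+1})

    # Sparse reward: zero at every step except the last.
    r = q_taken.new_zeros(q_taken.shape)
    r[-1] = reward

    # Squared soft Bellman residual: Q(s_t, a_t) ~ r_t + V(s_{t+1}).
    return F.mse_loss(q_taken, r + v_next)
```

Per the abstract, the multi-step path-consistency objective plays the analogous role over longer spans, which is what makes learning from a reward given only at the end of a sequence effective.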
Anthology ID: 2022.findings-emnlp.518
Volume: Findings of the Association for Computational Linguistics: EMNLP 2022
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 6969–6991
URL: https://aclanthology.org/2022.findings-emnlp.518
DOI: 10.18653/v1/2022.findings-emnlp.518
Cite (ACL): Han Guo, Bowen Tan, Zhengzhong Liu, Eric Xing, and Zhiting Hu. 2022. Efficient (Soft) Q-Learning for Text Generation with Limited Good Data. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6969–6991, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): Efficient (Soft) Q-Learning for Text Generation with Limited Good Data (Guo et al., Findings 2022)
PDF: https://preview.aclanthology.org/naacl24-info/2022.findings-emnlp.518.pdf