Vector-Quantized Prompt Learning for Paraphrase Generation

Haotian Luo, Yixin Liu, Peidong Liu, Xianggen Liu


Abstract
Deep generative modeling of natural language has achieved many successes, such as producing fluent sentences and translating from one language into another. However, generative modeling techniques for paraphrase generation still lag behind, largely due to the difficulty of reconciling expression diversity with semantic preservation. This paper proposes to generate diverse and high-quality paraphrases by exploiting pre-trained models with instance-dependent prompts. To learn generalizable prompts, we assume that the number of abstract transforming patterns in paraphrase generation (governed by prompts) is finite and usually not large. Therefore, we present vector-quantized prompts as cues to control the generation of pre-trained models. Extensive experiments demonstrate that the proposed method achieves new state-of-the-art results on three benchmark datasets: Quora, Wikianswers, and MSCOCO. We will release all the code upon acceptance.
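As a rough illustration of the idea described in the abstract, below is a minimal sketch of how a finite codebook of vector-quantized prompts could be learned and prepended to a pre-trained encoder-decoder. This is not the authors' released code (which is promised upon acceptance); the module name, codebook size, prompt length, loss weighting, and the VQ-VAE-style straight-through quantization are all assumptions made for illustration.

```python
# Minimal sketch (assumptions only, not the paper's implementation):
# a finite codebook of prompt vectors; each input picks its nearest code,
# which is then prepended to a pre-trained seq2seq model as a soft prompt.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizedPrompt(nn.Module):
    def __init__(self, num_codes=64, prompt_len=8, dim=768):
        super().__init__()
        # Finite codebook of abstract "transforming patterns" (prompts).
        self.codebook = nn.Embedding(num_codes, prompt_len * dim)
        self.prompt_len, self.dim = prompt_len, dim

    def forward(self, query):
        # query: (batch, prompt_len * dim), a continuous prompt proposal,
        # e.g. pooled from the source-sentence encoding (assumption).
        dists = torch.cdist(query, self.codebook.weight)   # (batch, num_codes)
        idx = dists.argmin(dim=-1)                          # nearest code per example
        quantized = self.codebook(idx)                      # (batch, prompt_len * dim)
        # VQ-VAE-style auxiliary losses: pull codes toward queries and vice versa.
        vq_loss = F.mse_loss(quantized, query.detach()) \
                  + 0.25 * F.mse_loss(query, quantized.detach())
        # Straight-through estimator so gradients still reach the query network.
        quantized = query + (quantized - query).detach()
        return quantized.view(-1, self.prompt_len, self.dim), idx, vq_loss

# Hypothetical usage with a pre-trained encoder-decoder (e.g., a BART-like model):
#   prompt, _, vq_loss = vq_prompt(query)                    # (batch, prompt_len, dim)
#   inputs_embeds = torch.cat([prompt, token_embeds], dim=1)  # prepend prompt
#   loss = model(inputs_embeds=inputs_embeds, labels=target_ids).loss + vq_loss
```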
Anthology ID:
2023.findings-emnlp.893
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13389–13398
URL:
https://aclanthology.org/2023.findings-emnlp.893
DOI:
10.18653/v1/2023.findings-emnlp.893
Cite (ACL):
Haotian Luo, Yixin Liu, Peidong Liu, and Xianggen Liu. 2023. Vector-Quantized Prompt Learning for Paraphrase Generation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13389–13398, Singapore. Association for Computational Linguistics.
Cite (Informal):
Vector-Quantized Prompt Learning for Paraphrase Generation (Luo et al., Findings 2023)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2023.findings-emnlp.893.pdf