@inproceedings{wan-etal-2023-pip,
    title = "{PIP}: Parse-Instructed Prefix for Syntactically Controlled Paraphrase Generation",
    author = "Wan, Yixin  and
      Huang, Kuan-Hao  and
      Chang, Kai-Wei",
    editor = "Rogers, Anna  and
      Boyd-Graber, Jordan  and
      Okazaki, Naoaki",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.findings-acl.659/",
    doi = "10.18653/v1/2023.findings-acl.659",
    pages = "10372--10380",
    abstract = "Syntactically controlled paraphrase generation requires language models to generate paraphrases for sentences according to specific syntactic structures. Existing fine-tuning methods for this task are costly, as all parameters of the model need to be updated during the training process. Inspired by recent studies on parameter-efficient learning, we propose Parse-Instructed Prefix (PIP), a novel adaptation of prefix-tuning to tune large pre-trained language models on the syntactically controlled paraphrase generation task in a low-data setting with significantly less training cost. We introduce two methods to instruct a model{'}s encoder prefix to capture syntax-related knowledge: direct initiation (PIP-Direct) and indirect optimization (PIP-Indirect). Compared to traditional fine-tuning methods for this task, PIP is a compute-efficient alternative with 10 times fewer learnable parameters. Compared to existing prefix-tuning methods, PIP excels at capturing syntax control information, achieving significantly higher performance at the same learnable parameter count."
}
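
For context, the sketch below shows what the generic prefix-tuning setup referenced in the abstract looks like in practice, using Hugging Face's `transformers` and `peft` libraries. The backbone model name and virtual-token count are illustrative assumptions, and PIP's parse-instructed components (PIP-Direct, PIP-Indirect) are the paper's own contribution and are not implemented here; this is only the baseline mechanism that PIP adapts.

```python
# Minimal sketch of prefix-tuning on a seq2seq LM (the baseline PIP builds on).
# Assumptions: `facebook/bart-base` as a stand-in backbone and 20 virtual tokens;
# the paper's actual backbone and prefix length may differ.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PrefixTuningConfig, TaskType, get_peft_model

model_name = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Only the prefix parameters are trainable; the backbone stays frozen,
# which is where the large reduction in learnable parameters comes from.
config = PrefixTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    num_virtual_tokens=20,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports trainable vs. total parameters
```

PIP differs from this plain setup in how the encoder prefix is informed by the target parse: either initialized directly from syntax-related knowledge (PIP-Direct) or pushed toward it through an auxiliary optimization objective (PIP-Indirect), per the abstract above.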