IAPT: Instance-Aware Prompt Tuning for Large Language Models
Wei Zhu, Aaron Tian, Congrui Yin, Yuan Ni, Xiaoling Wang, Guotong Xie
Abstract
Soft prompt tuning is a widely studied parameter-efficient fine-tuning method. However, it has a clear drawback: many soft tokens must be inserted into the input sequences to guarantee downstream performance. As a result, soft prompt tuning has received less attention than low-rank adaptation (LoRA) in the large language model (LLM) era. In this work, we propose a novel prompt tuning method, Instruction-Aware Prompt Tuning (IAPT), that requires only four soft tokens. First, we install a parameter-efficient soft prompt generator at each Transformer layer to generate idiosyncratic soft prompts for each input instruction. The generated soft prompts can be seen as a semantic summary of the input instruction and can effectively guide output generation. Second, the soft prompt generators are modules with a bottleneck architecture consisting of a self-attention pooling operation, two linear projections, and an activation function. Pilot experiments show that prompt generators at different Transformer layers require different activation functions. Thus, we propose to learn the idiosyncratic activation functions for the prompt generators automatically with the help of rational functions. We conduct experiments on various tasks, and the results demonstrate that (a) our IAPT method outperforms recent baselines with comparable numbers of tunable parameters, and (b) IAPT is more efficient than LoRA under the single-backbone multi-tenant setting.
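The abstract describes the per-layer soft prompt generator only at a high level. The following is a minimal, illustrative sketch (not the authors' code) of one plausible reading: learned query vectors perform self-attention pooling over the instruction's hidden states, and a bottleneck (down-projection, activation, up-projection) emits four soft prompt tokens. Class and parameter names (`SoftPromptGenerator`, `bottleneck_dim`) are hypothetical, and a GELU stands in for the learned rational activation the paper proposes.

```python
# Illustrative sketch of an instance-aware soft prompt generator (assumptions noted above).
import torch
import torch.nn as nn


class SoftPromptGenerator(nn.Module):
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64, num_prompt_tokens: int = 4):
        super().__init__()
        # Learned queries for self-attention pooling over instruction tokens.
        self.queries = nn.Parameter(torch.randn(num_prompt_tokens, hidden_dim) * 0.02)
        # Bottleneck: two linear projections around an activation.
        self.down_proj = nn.Linear(hidden_dim, bottleneck_dim)
        self.up_proj = nn.Linear(bottleneck_dim, hidden_dim)
        # Placeholder: the paper learns a layer-specific rational activation; GELU is used here instead.
        self.activation = nn.GELU()

    def forward(self, instruction_hidden: torch.Tensor) -> torch.Tensor:
        # instruction_hidden: (batch, seq_len, hidden_dim)
        scores = torch.einsum("qd,bsd->bqs", self.queries, instruction_hidden)
        attn = scores.softmax(dim=-1)                        # (batch, num_tokens, seq_len)
        pooled = torch.einsum("bqs,bsd->bqd", attn, instruction_hidden)
        # Bottleneck transform yields the soft prompt tokens for this layer.
        return self.up_proj(self.activation(self.down_proj(pooled)))


# Usage: generate 4 soft tokens from a toy instruction encoding.
gen = SoftPromptGenerator(hidden_dim=768)
soft_prompts = gen(torch.randn(2, 16, 768))
print(soft_prompts.shape)  # torch.Size([2, 4, 768])
```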
- Anthology ID:
- 2024.acl-long.771
- Volume:
- Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month:
- August
- Year:
- 2024
- Address:
- Bangkok, Thailand
- Editors:
- Lun-Wei Ku, Andre Martins, Vivek Srikumar
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 14285–14304
- URL:
- https://aclanthology.org/2024.acl-long.771
- DOI:
- 10.18653/v1/2024.acl-long.771
- Cite (ACL):
- Wei Zhu, Aaron Tian, Congrui Yin, Yuan Ni, Xiaoling Wang, and Guotong Xie. 2024. IAPT: Instance-Aware Prompt Tuning for Large Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14285–14304, Bangkok, Thailand. Association for Computational Linguistics.
- Cite (Informal):
- IAPT: Instance-Aware Prompt Tuning for Large Language Models (Zhu et al., ACL 2024)
- PDF:
- https://preview.aclanthology.org/dois-2013-emnlp/2024.acl-long.771.pdf