Leveraging Self-Attention for Input-Dependent Soft Prompting in LLMs

Ananth Muppidi, Abhilash Nandy, Sambaran Bandyopadhyay


Abstract
Achieving strong performance with large language models on domain-specific tasks often necessitates fine-tuning, which is computationally expensive and technically challenging. This paper focuses on parameter-efficient fine-tuning using soft prompting, a promising approach that adapts pre-trained models to downstream tasks by learning a small set of parameters. We propose a novel Input Dependent Soft Prompting technique with a self-Attention Mechanism (ID-SPAM) that generates soft prompts based on the input tokens and attends to different tokens with varying importance. Our method is simple and efficient, keeping the number of trainable parameters small. We show the merits of the proposed approach compared to state-of-the-art techniques on various tasks and demonstrate its improved zero-shot domain transfer capability.
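The sketch below illustrates the general idea described in the abstract: generating soft prompts that depend on the input via attention, while the backbone model stays frozen. It is a minimal illustration, not the authors' released implementation; the module name, prompt length, and the specific choice of learnable query vectors attending over the input embeddings are assumptions for exposition.

```python
# Minimal sketch (assumed design, not the paper's exact architecture) of
# input-dependent soft prompting via attention, in PyTorch.
import torch
import torch.nn as nn

class InputDependentSoftPrompt(nn.Module):
    def __init__(self, embed_dim: int, prompt_len: int = 10, num_heads: int = 4):
        super().__init__()
        # Learnable queries, one per soft-prompt position (illustrative choice).
        self.queries = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)
        # Attention lets each prompt position weight input tokens differently.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim) from the frozen LLM's embedding layer.
        batch = input_embeds.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        # Each prompt vector is an attention-weighted mixture of the input token embeddings,
        # so the resulting soft prompt is conditioned on the specific input.
        prompts, _ = self.attn(q, input_embeds, input_embeds)
        return prompts  # (batch, prompt_len, embed_dim)

# Usage sketch: prepend the generated prompts to the frozen model's input
# embeddings; only the small prompt-generator parameters are trained.
# embeds = model.get_input_embeddings()(input_ids)        # frozen backbone
# soft = prompt_generator(embeds)                         # input-dependent prompts
# inputs_embeds = torch.cat([soft, embeds], dim=1)
# outputs = model(inputs_embeds=inputs_embeds)
```

This keeps the trainable parameter count to the query vectors plus one attention block, in the spirit of parameter-efficient soft prompting.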
Anthology ID:
2025.acl-short.74
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Note:
Pages:
960–969
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-short.74/
Cite (ACL):
Ananth Muppidi, Abhilash Nandy, and Sambaran Bandyopadhyay. 2025. Leveraging Self-Attention for Input-Dependent Soft Prompting in LLMs. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 960–969, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Leveraging Self-Attention for Input-Dependent Soft Prompting in LLMs (Muppidi et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-short.74.pdf