Abstract
Zero-shot dialogue state tracking (DST) seeks to enable dialogue systems to transition to unfamiliar domains without manual annotation or extensive retraining. Prior research has approached this objective by embedding prompts into language models (LMs). Common methodologies include integrating prompts at the input layer or introducing learnable variables at each transformer layer. Nonetheless, each strategy exhibits inherent limitations. Prompts integrated at the input layer risk underutilization, with their impact potentially diminishing across successive transformer layers. Conversely, the addition of learnable variables to each layer can complicate the training process and increase inference latency. To tackle the issues mentioned above, this paper proposes Dual Low-Rank Adaptation (DualLoRA), a plug-and-play architecture designed for zero-shot DST. DualLoRA incorporates two distinct Low-Rank Adaptation (LoRA) components, targeting both dialogue context processing and prompt optimization, to ensure the comprehensive influence of prompts throughout the transformer model layers. This is achieved without incurring additional inference latency, showcasing an efficient integration into existing architectures. Through rigorous evaluation on the MultiWOZ and SGD datasets, DualLoRA demonstrates notable improvements across multiple domains, outperforming traditional baseline methods in zero-shot settings.
- Anthology ID:
- 2024.acl-long.312
- Volume:
- Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month:
- August
- Year:
- 2024
- Address:
- Bangkok, Thailand
- Editors:
- Lun-Wei Ku, Andre Martins, Vivek Srikumar
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 5746–5765
- URL:
- https://aclanthology.org/2024.acl-long.312
- DOI:
- 10.18653/v1/2024.acl-long.312
- Cite (ACL):
- Xiang Luo, Zhiwen Tang, Jin Wang, and Xuejie Zhang. 2024. Zero-Shot Cross-Domain Dialogue State Tracking via Dual Low-Rank Adaptation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5746–5765, Bangkok, Thailand. Association for Computational Linguistics.
- Cite (Informal):
- Zero-Shot Cross-Domain Dialogue State Tracking via Dual Low-Rank Adaptation (Luo et al., ACL 2024)
- PDF:
- https://preview.aclanthology.org/autopr/2024.acl-long.312.pdf
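The abstract's core idea, a frozen base weight augmented with two low-rank (LoRA) branches, one serving the dialogue context and one serving the prompt, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: all names, shapes, and the token-mask routing are illustrative assumptions, intended only to show how a per-layer dual adapter adds no inference-time depth beyond the base projection.

```python
import numpy as np

# Hypothetical sketch of a "dual LoRA" linear layer: the frozen weight W is
# augmented with two low-rank branches, one applied at dialogue-context token
# positions and one at prompt token positions. Shapes are illustrative only.
rng = np.random.default_rng(0)
d_in, d_out, rank = 8, 8, 2

W = rng.standard_normal((d_out, d_in))           # frozen pretrained weight
A_ctx = rng.standard_normal((rank, d_in)) * 0.1  # context branch: down-projection
B_ctx = np.zeros((d_out, rank))                  # context branch: up-projection (zero-init)
A_prm = rng.standard_normal((rank, d_in)) * 0.1  # prompt branch: down-projection
B_prm = np.zeros((d_out, rank))                  # prompt branch: up-projection (zero-init)

def dual_lora_forward(x, is_prompt):
    """x: (seq, d_in) token states; is_prompt: (seq,) boolean mask.

    Each token passes through the shared frozen projection plus the low-rank
    update of its own branch, so the prompt keeps a dedicated adaptation path
    at every layer without any extra sequential computation at inference.
    """
    base = x @ W.T                        # frozen path, identical for all tokens
    delta_ctx = (x @ A_ctx.T) @ B_ctx.T   # rank-2 update for context tokens
    delta_prm = (x @ A_prm.T) @ B_prm.T   # rank-2 update for prompt tokens
    return base + np.where(is_prompt[:, None], delta_prm, delta_ctx)

x = rng.standard_normal((5, d_in))
is_prompt = np.array([True, True, False, False, False])
y = dual_lora_forward(x, is_prompt)
# With the B matrices zero-initialized (standard LoRA practice), the layer
# starts out exactly equal to the frozen base projection.
assert np.allclose(y, x @ W.T)
```

Because each low-rank update is a pair of matrix multiplications running in parallel with the frozen path, the two branches can also be merged into `W` after training, which is how LoRA-style methods avoid added inference latency.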