Making Pre-trained Language Models Better Learn Few-Shot Spoken Language Understanding in More Practical Scenarios

Yufan Wang, Jie Mei, Bowei Zou, Rui Fan, Tingting He, Ai Ti Aw


Abstract
Most previous few-shot Spoken Language Understanding (SLU) models need to be trained on a set of data-rich source domains and then adapted to the target domain with a few examples. In this paper, we explore a more practical scenario for few-shot SLU, in which we assume access only to a pre-trained language model and a few labeled examples, without any other source domain data. We concentrate on understanding how far few-shot SLU can be pushed in this setting. To this end, we develop a prompt-based intent detection model for few-shot settings, which leverages BERT’s original next sentence prediction pre-training task together with a prompt template to detect the user’s intent. For slot filling, we propose a slot-label reconstruction approach that lowers training complexity by reducing the number of slot labels in few-shot settings. To evaluate few-shot SLU in this more practical scenario, we present two benchmarks, FewShotATIS and FewShotSNIPS, constructed with a dynamic sampling strategy that accounts for the learning difficulty of each intent and slot. Experiments on FewShotATIS and FewShotSNIPS demonstrate that our proposed model achieves state-of-the-art performance.
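A minimal sketch of the kind of prompt-based intent detection the abstract describes: the utterance is paired with a prompted statement of each candidate intent, and BERT's next sentence prediction (NSP) head scores how well the two cohere. The template wording, the intent label names, and the checkpoint are illustrative assumptions, not the authors' exact configuration.

# Sketch: intent detection via BERT's NSP head and a prompt template.
# Assumptions: template text, intent names, and "bert-base-uncased" are placeholders.
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
model.eval()

utterance = "show me flights from boston to denver"
candidate_intents = ["flight", "airfare", "ground service"]  # assumed label set

def nsp_score(first: str, second: str) -> float:
    """Probability that `second` is a coherent continuation of `first`."""
    inputs = tokenizer(first, second, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape [1, 2]; index 0 = "is next sentence"
    return torch.softmax(logits, dim=-1)[0, 0].item()

# Score each prompted intent against the utterance and pick the most coherent one.
scores = {
    intent: nsp_score(utterance, f"the user intent is {intent}")
    for intent in candidate_intents
}
predicted_intent = max(scores, key=scores.get)
print(predicted_intent, scores)

Because the NSP head is part of BERT's original pre-training objective, this formulation needs no new classification layer, which is one way a prompt-based model can be adapted with only a few labeled examples.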
Anthology ID:
2023.findings-acl.853
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13508–13523
URL:
https://aclanthology.org/2023.findings-acl.853
DOI:
10.18653/v1/2023.findings-acl.853
Cite (ACL):
Yufan Wang, Jie Mei, Bowei Zou, Rui Fan, Tingting He, and Ai Ti Aw. 2023. Making Pre-trained Language Models Better Learn Few-Shot Spoken Language Understanding in More Practical Scenarios. In Findings of the Association for Computational Linguistics: ACL 2023, pages 13508–13523, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Making Pre-trained Language Models Better Learn Few-Shot Spoken Language Understanding in More Practical Scenarios (Wang et al., Findings 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/2023.findings-acl.853.pdf