CSS: Combining Self-training and Self-supervised Learning for Few-shot Dialogue State Tracking

Haoning Zhang, Junwei Bao, Haipeng Sun, Huaishao Luo, Wenye Li, Shuguang Cui


Abstract
Few-shot dialogue state tracking (DST) is a realistic problem that trains the DST model with limited labeled data. Existing few-shot methods mainly transfer knowledge learned from external labeled dialogue data (e.g., question answering, dialogue summarization, and machine reading comprehension tasks) into DST; however, collecting a large amount of external labeled data is laborious, and the external data may not contribute effectively to the DST-specific task. In this paper, we propose a few-shot DST framework called CSS, which Combines Self-training and Self-supervised learning methods. The unlabeled data of the DST task is incorporated into the self-training iterations, where the pseudo labels are predicted by a DST model trained in advance on the limited labeled data. Besides, a contrastive self-supervised method is used to learn better representations, where the data is augmented by the dropout operation to train the model. Experimental results on the MultiWOZ dataset show that our proposed CSS achieves competitive performance in several few-shot scenarios.
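The contrastive self-supervised component described above can be illustrated with a minimal sketch. This is not the paper's implementation: the toy `encode` function, its fixed weights, and the batch data are all hypothetical stand-ins. It only shows the general mechanism, encoding each input twice so that dropout noise yields two distinct "views", then applying an InfoNCE-style contrastive loss in which the two views of an example are positives and other examples in the batch are negatives.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, drop_rate=0.1):
    """Toy 'encoder': a fixed linear map followed by inverted dropout.
    Dropout is the only noise source, so two passes over the same input
    produce two different representations (the augmentation the abstract
    describes). W is a hypothetical stand-in for model weights."""
    W = np.full((x.shape[-1], 4), 0.5)
    h = x @ W
    mask = rng.random(h.shape) >= drop_rate
    return h * mask / (1.0 - drop_rate)

def info_nce(z1, z2, temp=0.05):
    """InfoNCE contrastive loss: row i of z1 and row i of z2 are the two
    dropout views of example i (positives on the diagonal of the
    similarity matrix); all other rows act as in-batch negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / temp                       # pairwise cosine sims
    sim -= sim.max(axis=1, keepdims=True)          # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # pull positives together

batch = rng.random((8, 6))                         # 8 toy dialogue encodings
loss = info_nce(encode(batch), encode(batch))      # two dropout views
```

In the full framework this loss would be minimized jointly with the supervised DST objective during the self-training iterations; here the point is only that the augmentation costs nothing beyond a second forward pass.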
Anthology ID:
2022.aacl-short.37
Volume:
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month:
November
Year:
2022
Address:
Online only
Venues:
AACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
302–310
URL:
https://aclanthology.org/2022.aacl-short.37
Cite (ACL):
Haoning Zhang, Junwei Bao, Haipeng Sun, Huaishao Luo, Wenye Li, and Shuguang Cui. 2022. CSS: Combining Self-training and Self-supervised Learning for Few-shot Dialogue State Tracking. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 302–310, Online only. Association for Computational Linguistics.
Cite (Informal):
CSS: Combining Self-training and Self-supervised Learning for Few-shot Dialogue State Tracking (Zhang et al., AACL-IJCNLP 2022)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2022.aacl-short.37.pdf
Software:
2022.aacl-short.37.Software.zip