Multijugate Dual Learning for Low-Resource Task-Oriented Dialogue System

Shimin Li, Xiaotian Zhang, Yanjun Zheng, Linyang Li, Xipeng Qiu


Abstract
Dialogue data in real-world scenarios tend to be sparse, leaving data-starved end-to-end dialogue systems inadequately trained. We find that data utilization efficiency in low-resource scenarios can be enhanced by mining the alignment between uncertain utterances and deterministic dialogue states. We therefore implement dual learning in task-oriented dialogue to exploit the correlation between these heterogeneous data. In addition, we convert the one-to-one duality into a multijugate duality to reduce the influence of spurious correlations during dual training and improve generalization. Our method introduces no additional parameters and can be implemented in arbitrary networks. Extensive empirical analyses demonstrate that the proposed method improves the effectiveness of end-to-end task-oriented dialogue systems across multiple benchmarks and achieves state-of-the-art results in low-resource scenarios.
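The abstract's core idea can be sketched as follows: a forward model maps an utterance to a dialogue state, a backward model maps the state back to an utterance, and the two losses are trained jointly; the multijugate extension pairs one deterministic state with several paraphrased utterances. This is a minimal, illustrative sketch only; the function names (`forward_loss`, `backward_loss`, `multijugate_dual_loss`) and the stand-in loss computations are hypothetical placeholders, not the paper's actual models or API.

```python
# Hedged sketch of the dual-learning objective described in the abstract.
# The losses below are toy stand-ins for negative log-likelihoods from
# real seq2seq models; only the training structure is the point.

def forward_loss(utterance: str, state: str) -> float:
    # stand-in for -log p(state | utterance) of a state-tracking model
    return len(state) / (len(utterance) + 1)

def backward_loss(state: str, utterance: str) -> float:
    # stand-in for -log p(utterance | state) of a generation model
    return len(utterance) / (len(state) + 1)

def multijugate_dual_loss(utterances: list[str], state: str) -> float:
    """Multijugate duality: one deterministic dialogue state is paired
    with several paraphrased utterances, and both dual directions are
    optimized over every pairing."""
    total = 0.0
    for u in utterances:
        total += forward_loss(u, state)   # utterance -> dialogue state
        total += backward_loss(state, u)  # dialogue state -> utterance
    return total / len(utterances)
```

In actual training the paraphrases diversify the surface forms tied to a single state, which is what the abstract credits with reducing spurious correlations between specific wordings and states.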
Anthology ID:
2023.findings-acl.702
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
11037–11053
URL:
https://aclanthology.org/2023.findings-acl.702
DOI:
10.18653/v1/2023.findings-acl.702
Cite (ACL):
Shimin Li, Xiaotian Zhang, Yanjun Zheng, Linyang Li, and Xipeng Qiu. 2023. Multijugate Dual Learning for Low-Resource Task-Oriented Dialogue System. In Findings of the Association for Computational Linguistics: ACL 2023, pages 11037–11053, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Multijugate Dual Learning for Low-Resource Task-Oriented Dialogue System (Li et al., Findings 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2023.findings-acl.702.pdf