Towards Unified Spoken Language Understanding Decoding via Label-aware Compact Linguistics Representations

Zhihong Zhu, Xuxin Cheng, Zhiqi Huang, Dongsheng Chen, Yuexian Zou


Abstract
Joint intent detection and slot filling models have shown promising success in recent years due to the high correlation between the two tasks. However, previous works decode the two tasks independently, which can result in misaligned predictions for both tasks. To address this shortcoming, we propose a novel method named Label-aware Compact Linguistics Representation (LCLR), which leverages label embeddings to jointly guide the decoding process. Concretely, LCLR projects both task-specific hidden states into a joint label latent space, where each hidden state can be concisely represented as a linear combination of label embeddings. Such feature decomposition of the task-specific hidden states increases the representational power over the linguistics of the utterance. Extensive experiments on two single- and multi-intent SLU benchmarks demonstrate that LCLR learns more discriminative label information than previous separate decoders and consistently outperforms prior state-of-the-art methods across all metrics. More encouragingly, LCLR can be applied to boost the performance of existing approaches, making it easy to incorporate into any existing SLU model.
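The decomposition described above lends itself to a compact implementation. Below is a minimal, hypothetical PyTorch sketch of the idea: task-specific hidden states are projected into a shared label latent space and re-expressed as linear combinations of label embeddings, with the combination weights doubling as label scores. All names and dimensions (`LCLRDecoder`, `label_dim`, etc.) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LCLRDecoder(nn.Module):
    """Hypothetical sketch: re-express task-specific hidden states as
    linear combinations of label embeddings in a joint label latent space."""

    def __init__(self, hidden_dim, label_dim, num_intents, num_slots):
        super().__init__()
        # One learnable embedding per intent / slot label (the joint label space).
        self.intent_emb = nn.Embedding(num_intents, label_dim)
        self.slot_emb = nn.Embedding(num_slots, label_dim)
        # Task-specific projections into that space.
        self.intent_proj = nn.Linear(hidden_dim, label_dim)
        self.slot_proj = nn.Linear(hidden_dim, label_dim)

    @staticmethod
    def decompose(h, label_weight):
        # Similarity of each projected hidden state to every label embedding.
        logits = h @ label_weight.t()
        # Softmax weights give the coefficients of the linear combination,
        # so the same quantities serve as label scores for decoding.
        coeffs = F.softmax(logits, dim=-1)
        compact = coeffs @ label_weight  # compact, label-aware representation
        return logits, compact

    def forward(self, intent_hidden, slot_hidden):
        # intent_hidden: (batch, hidden_dim); slot_hidden: (batch, seq_len, hidden_dim)
        intent_logits, _ = self.decompose(self.intent_proj(intent_hidden),
                                          self.intent_emb.weight)
        slot_logits, _ = self.decompose(self.slot_proj(slot_hidden),
                                        self.slot_emb.weight)
        return intent_logits, slot_logits

# Illustrative usage (sizes are arbitrary):
# decoder = LCLRDecoder(hidden_dim=256, label_dim=128, num_intents=21, num_slots=120)
```

Because both tasks score against embeddings living in the same latent space, the two decoders share label information at decoding time rather than predicting in isolation, which is the misalignment the abstract targets.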
Anthology ID: 2023.findings-acl.793
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 12523–12531
URL: https://aclanthology.org/2023.findings-acl.793
DOI: 10.18653/v1/2023.findings-acl.793
Cite (ACL): Zhihong Zhu, Xuxin Cheng, Zhiqi Huang, Dongsheng Chen, and Yuexian Zou. 2023. Towards Unified Spoken Language Understanding Decoding via Label-aware Compact Linguistics Representations. In Findings of the Association for Computational Linguistics: ACL 2023, pages 12523–12531, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Towards Unified Spoken Language Understanding Decoding via Label-aware Compact Linguistics Representations (Zhu et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-acl.793.pdf