Token-level Sequence Labeling for Spoken Language Understanding using Compositional End-to-End Models

Siddhant Arora, Siddharth Dalmia, Brian Yan, Florian Metze, Alan W Black, Shinji Watanabe


Abstract
End-to-end spoken language understanding (SLU) systems are gaining popularity over cascaded approaches due to their simplicity and ability to avoid error propagation. However, these systems model sequence labeling as a sequence prediction task, causing a divergence from its well-established token-level tagging formulation. We build compositional end-to-end SLU systems that explicitly separate the added complexity of recognizing spoken mentions in SLU from the NLU task of sequence labeling. By relying on intermediate decoders trained for ASR, our end-to-end systems transform the input modality from speech to token-level representations that can be used in the traditional sequence labeling framework. This composition of ASR and NLU formulations in our end-to-end SLU system offers direct compatibility with pre-trained ASR and NLU systems, allows performance monitoring of individual components, and enables the use of globally normalized losses like CRF, making them attractive in practical scenarios. Our models outperform both cascaded and direct end-to-end models on a labeling task of named entity recognition across SLU benchmarks.
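The compositional pipeline the abstract describes can be sketched in miniature: an intermediate ASR decoder maps speech to token-level representations, and a separate NLU module tags those tokens in the classic BIO sequence-labeling formulation. The sketch below is a toy illustration, not the paper's implementation; all class names, the rule-based "tagger," and the stand-in features are hypothetical, and real systems would use neural ASR/NLU components with a CRF loss.

```python
# Toy sketch of the compositional end-to-end SLU idea: an intermediate
# "ASR" stage emits token-level representations, which a separate
# sequence-labeling stage tags with BIO labels. Each component's output
# can be inspected (monitored) independently, as the abstract notes.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class TokenRepr:
    """A token from the intermediate decoder: surface form plus a (toy)
    feature vector consumed by the downstream tagger."""
    text: str
    embedding: List[float]


class ToyASRDecoder:
    """Stand-in for an ASR intermediate decoder. For simplicity the
    'speech' input is already a list of words; a real decoder would
    transform acoustic features into these token-level representations."""

    def decode(self, utterance: List[str]) -> List[TokenRepr]:
        # Toy features: is-capitalized flag and token length.
        return [
            TokenRepr(w, [1.0 if w[0].isupper() else 0.0, float(len(w))])
            for w in utterance
        ]


class ToyTagger:
    """Stand-in for the NLU sequence labeler. Here a trivial rule tags
    capitalized tokens as entity mentions with BIO labels; a real
    system would use a learned tagger, e.g. with a CRF layer."""

    def tag(self, tokens: List[TokenRepr]) -> List[str]:
        labels, in_entity = [], False
        for t in tokens:
            if t.embedding[0] == 1.0:  # 'capitalized' feature fires
                labels.append("I-ENT" if in_entity else "B-ENT")
                in_entity = True
            else:
                labels.append("O")
                in_entity = False
        return labels


def slu_pipeline(utterance: List[str]) -> List[Tuple[str, str]]:
    """Compose the two modules end to end: ASR -> token reprs -> tagger."""
    tokens = ToyASRDecoder().decode(utterance)
    return list(zip([t.text for t in tokens], ToyTagger().tag(tokens)))
```

For example, `slu_pipeline(["flights", "to", "New", "York"])` yields `[("flights", "O"), ("to", "O"), ("New", "B-ENT"), ("York", "I-ENT")]`; because the interface between the two stages is an explicit token sequence, either component can be swapped for a pre-trained ASR or NLU model.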
Anthology ID:
2022.findings-emnlp.396
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5419–5429
URL:
https://aclanthology.org/2022.findings-emnlp.396
DOI:
10.18653/v1/2022.findings-emnlp.396
Cite (ACL):
Siddhant Arora, Siddharth Dalmia, Brian Yan, Florian Metze, Alan W Black, and Shinji Watanabe. 2022. Token-level Sequence Labeling for Spoken Language Understanding using Compositional End-to-End Models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5419–5429, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Token-level Sequence Labeling for Spoken Language Understanding using Compositional End-to-End Models (Arora et al., Findings 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-1/2022.findings-emnlp.396.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-1/2022.findings-emnlp.396.mp4