Learning Spoken Language Representations with Neural Lattice Language Modeling

Chao-Wei Huang, Yun-Nung Chen


Abstract
Pre-trained language models have achieved substantial improvements on many NLP tasks. However, these methods are usually designed for written text, so they do not consider the properties of spoken language. Therefore, this paper aims at generalizing the idea of language model pre-training to lattices generated by recognition systems. We propose a framework that trains neural lattice language models to provide contextualized representations for spoken language understanding tasks. The proposed two-stage pre-training approach reduces the demand for speech data and improves training efficiency. Experiments on intent detection and dialogue act recognition datasets demonstrate that our proposed method consistently outperforms strong baselines when evaluated on spoken inputs. The code is available at https://github.com/MiuLab/Lattice-ELMo.
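
The abstract's core idea, running a language model over ASR lattices so that each node pools context from multiple predecessor arcs, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch illustration, not the authors' Lattice-ELMo implementation: it assumes a topologically sorted lattice whose arcs carry ASR posterior weights, and it averages predecessor LSTM states by those weights before each step. All names here (LatticeLSTMLM, the node dict format) are assumptions for illustration.

import torch
import torch.nn as nn

class LatticeLSTMLM(nn.Module):
    """Hypothetical sketch of a forward lattice LSTM language model."""
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.cell = nn.LSTMCell(dim, dim)
        self.out = nn.Linear(dim, vocab_size)
        self.dim = dim

    def forward(self, nodes):
        # nodes: topologically sorted list of dicts with keys
        #   "token" (int token id) and
        #   "preds" (list of (predecessor_index, arc_posterior) pairs).
        h_states, c_states, logits = [], [], []
        for node in nodes:
            if node["preds"]:
                # Pool predecessor states, weighted by normalized arc posteriors.
                w = torch.tensor([p for _, p in node["preds"]])
                w = w / w.sum()
                h_prev = sum(w[k] * h_states[i] for k, (i, _) in enumerate(node["preds"]))
                c_prev = sum(w[k] * c_states[i] for k, (i, _) in enumerate(node["preds"]))
            else:
                # Lattice start node: zero-initialized state.
                h_prev = torch.zeros(1, self.dim)
                c_prev = torch.zeros(1, self.dim)
            x = self.embed(torch.tensor([node["token"]]))
            h, c = self.cell(x, (h_prev, c_prev))
            h_states.append(h)
            c_states.append(c)
            logits.append(self.out(h))  # next-token distribution at this node
        return torch.cat(logits, dim=0)

# Toy lattice with two competing arcs ("a" vs. "the") that re-merge:
nodes = [
    {"token": 3, "preds": []},                    # "find"
    {"token": 7, "preds": [(0, 0.6)]},            # "a"   (posterior 0.6)
    {"token": 8, "preds": [(0, 0.4)]},            # "the" (posterior 0.4)
    {"token": 5, "preds": [(1, 0.6), (2, 0.4)]},  # "flight" pools both paths
]
model = LatticeLSTMLM(vocab_size=10)
logits = model(nodes)  # shape (num_nodes, vocab_size)

A forward pass of this kind yields one contextualized vector per lattice node; representations in this spirit are what the paper's framework provides to downstream spoken language understanding classifiers.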
Anthology ID:
2020.acl-main.347
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
3764–3769
URL:
https://aclanthology.org/2020.acl-main.347
DOI:
10.18653/v1/2020.acl-main.347
Cite (ACL):
Chao-Wei Huang and Yun-Nung Chen. 2020. Learning Spoken Language Representations with Neural Lattice Language Modeling. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3764–3769, Online. Association for Computational Linguistics.
Cite (Informal):
Learning Spoken Language Representations with Neural Lattice Language Modeling (Huang & Chen, ACL 2020)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2020.acl-main.347.pdf
Video:
http://slideslive.com/38929295
Code
MiuLab/Lattice-ELMo (+ additional community code)
Data
ATIS, SNIPS