Improving Neural Language Models by Segmenting, Attending, and Predicting the Future

Hongyin Luo, Lan Jiang, Yonatan Belinkov, James Glass


Abstract
Language models typically predict the next word given the context. In this work, we propose a method that improves language modeling by learning to align the given context with the following phrase. The model requires no linguistic annotation of phrase segmentation. Instead, we define syntactic heights and phrase segmentation rules, enabling the model to automatically induce phrases, recognize their task-specific heads, and generate phrase embeddings in an unsupervised manner. Our method can easily be applied to language models with different network architectures, since an independent module performs phrase induction and context-phrase alignment and no change is required in the underlying language modeling network. Experiments show that our model outperforms several strong baselines on different datasets, achieving a new state-of-the-art perplexity of 17.4 on the WikiText-103 dataset. Additionally, visualizing the outputs of the phrase induction module shows that the model learns approximate phrase-level structural knowledge without any annotation.
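As a rough illustration of the approach the abstract describes (not the authors' implementation; see luohongyin/PILM below for that), the following minimal PyTorch sketch predicts a scalar "syntactic height" per token, uses height changes to gate a soft running phrase embedding, and lets the context attend to these phrase embeddings before predicting the next word. All module names, the gating rule, and hyperparameters are assumptions made for this sketch.

```python
import torch
import torch.nn as nn

class PhraseInductionLM(nn.Module):
    """Toy LM that induces soft phrase segments from learned token heights."""

    def __init__(self, vocab_size, d_model=256, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.LSTM(d_model, d_model, batch_first=True)
        self.height = nn.Linear(d_model, 1)    # scalar "syntactic height" per token
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)                 # (B, T, D)
        h, _ = self.encoder(x)                 # contextual token states
        heights = self.height(h).squeeze(-1)   # (B, T)

        # Toy soft segmentation: a rise in height opens a new phrase, so the
        # gate decides how much of the running phrase state to overwrite.
        # (The paper instead applies explicit segmentation rules to the
        # induced heights; this sigmoid gate is a simplification.)
        phrase_states = [h[:, 0]]
        for t in range(1, h.size(1)):
            g = torch.sigmoid(heights[:, t] - heights[:, t - 1]).unsqueeze(-1)
            phrase_states.append(g * h[:, t] + (1 - g) * phrase_states[-1])
        phrases = torch.stack(phrase_states, dim=1)   # (B, T, D)

        # Align the context with phrase embeddings via attention, then predict
        # the next word from the combined representation. NOTE: causal masking
        # is omitted in this toy sketch for brevity.
        ctx, _ = self.attn(h, phrases, phrases)
        return self.out(h + ctx)               # (B, T, vocab) next-word logits

lm = PhraseInductionLM(vocab_size=1000)
logits = lm(torch.randint(0, 1000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 1000])
```

Because phrase induction and context-phrase alignment live in their own module here, the underlying language model (the LSTM encoder) could be swapped for another architecture without changing the induction logic, mirroring the modularity the abstract claims.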
Anthology ID:
P19-1144
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1483–1493
URL:
https://aclanthology.org/P19-1144
DOI:
10.18653/v1/P19-1144
Cite (ACL):
Hongyin Luo, Lan Jiang, Yonatan Belinkov, and James Glass. 2019. Improving Neural Language Models by Segmenting, Attending, and Predicting the Future. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1483–1493, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Improving Neural Language Models by Segmenting, Attending, and Predicting the Future (Luo et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1144.pdf
Code:
luohongyin/PILM
Data:
WikiText-103
WikiText-2