Learning to Discover, Ground and Use Words with Segmental Neural Language Models

Kazuya Kawakami, Chris Dyer, Phil Blunsom


Abstract
We propose a segmental neural language model that combines the generalization power of neural networks with the ability to discover word-like units that are latent in unsegmented character sequences. In contrast to previous segmentation models that treat word segmentation as an isolated task, our model unifies word discovery, learning how words fit together to form sentences, and, by conditioning the model on visual context, how words’ meanings ground in representations of nonlinguistic modalities. Experiments show that the unconditional model learns predictive distributions better than character LSTM models, discovers words competitively with nonparametric Bayesian word segmentation models, and that modeling language conditional on visual context improves performance on both.
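The abstract describes a model that marginalizes over latent word segmentations of a character sequence. As a point of reference only, here is a minimal sketch of the forward dynamic program such a segmental language model typically uses to compute that marginal; `seg_log_prob` is a hypothetical segment scorer standing in for the paper's neural parameterization (a character-level decoder conditioned on the preceding context and, in the grounded variant, on visual features), and `max_seg_len` caps segment length to keep the lattice tractable.

```python
import math

def segmental_lm_log_prob(chars, seg_log_prob, max_seg_len=10):
    """Log-marginal likelihood of `chars` under a segmental LM.

    Sums over all segmentations with a forward pass:
        alpha[t] = logsumexp_s ( alpha[s] + log p(chars[s:t] | chars[:s]) )
    `seg_log_prob(prefix, segment)` is a hypothetical scorer returning the
    log-probability of generating `segment` given the preceding `prefix`.
    """
    n = len(chars)
    alpha = [-math.inf] * (n + 1)
    alpha[0] = 0.0  # the empty prefix has probability 1
    for t in range(1, n + 1):
        scores = [
            alpha[s] + seg_log_prob(chars[:s], chars[s:t])
            for s in range(max(0, t - max_seg_len), t)
            if alpha[s] > -math.inf
        ]
        if scores:  # log-sum-exp over all ways to reach position t
            m = max(scores)
            alpha[t] = m + math.log(sum(math.exp(x - m) for x in scores))
    return alpha[n]  # log p(chars), marginalizing over segmentations

# Toy usage with a scorer that charges a fixed cost per character;
# in practice the scorer would be a trained neural network.
print(segmental_lm_log_prob("anapple", lambda prefix, seg: -2.0 * len(seg)))
```

The same lattice supports recovering a segmentation (e.g., by replacing the sum with a max and backtracking), which is how word-like units can be read off an unsegmented input; this sketch shows only the likelihood computation, not the paper's exact model.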
Anthology ID: P19-1645
Volume: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2019
Address: Florence, Italy
Editors: Anna Korhonen, David Traum, Lluís Màrquez
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 6429–6441
URL: https://aclanthology.org/P19-1645
DOI: 10.18653/v1/P19-1645
Cite (ACL): Kazuya Kawakami, Chris Dyer, and Phil Blunsom. 2019. Learning to Discover, Ground and Use Words with Segmental Neural Language Models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6429–6441, Florence, Italy. Association for Computational Linguistics.
Cite (Informal): Learning to Discover, Ground and Use Words with Segmental Neural Language Models (Kawakami et al., ACL 2019)
PDF: https://preview.aclanthology.org/dois-2013-emnlp/P19-1645.pdf
Data: MS COCO