Abstract
Fixed-vocabulary language models fail to account for one of the most characteristic statistical facts of natural language: the frequent creation and reuse of new word types. Although character-level language models offer a partial solution in that they can create word types not attested in the training corpus, they do not capture the “bursty” distribution of such words. In this paper, we augment a hierarchical LSTM language model that generates sequences of word tokens character by character with a caching mechanism that learns to reuse previously generated words. To validate our model we construct a new open-vocabulary language modeling corpus (the Multilingual Wikipedia Corpus; MWC) from comparable Wikipedia articles in 7 typologically diverse languages and demonstrate the effectiveness of our model across this range of languages.
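The abstract describes two coupled pathways: a character-level decoder that can spell any new word, and a cache over recently generated words that captures their bursty reuse. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation; the class and parameter names, and the assumption that `h` is the hidden state of an outer word-level LSTM (not shown), are illustrative.

```python
# Minimal sketch of a character-level word generator with a reuse cache.
# Given a word-level context vector h, the model either spells a new word
# character by character, or copies a word from a cache of recent words.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharGenWithCache(nn.Module):
    def __init__(self, n_chars, d_hid=256):
        super().__init__()
        self.char_rnn = nn.LSTMCell(d_hid, d_hid)   # spells one word
        self.char_emb = nn.Embedding(n_chars, d_hid)
        self.char_out = nn.Linear(d_hid, n_chars)
        self.gate = nn.Linear(d_hid, 1)             # P(reuse a cached word | h)
        self.query = nn.Linear(d_hid, d_hid)        # scores h against cache keys

    def spell_logprob(self, h, char_ids):
        """Log-probability of generating `char_ids` one character at a time."""
        state = (h, torch.zeros_like(h))
        logp = torch.zeros(())
        prev = torch.zeros_like(h)                  # start-of-word input
        for c in char_ids:
            state = self.char_rnn(prev, state)
            logp = logp + F.log_softmax(self.char_out(state[0]), -1)[0, c]
            prev = self.char_emb(torch.tensor([c]))
        return logp

    def word_logprob(self, h, char_ids, cache):
        """Mix the 'generate anew' and 'copy from cache' pathways.

        `cache` is a list of (key_vector, char_tuple) pairs for words
        the model has produced recently.
        """
        p_copy = torch.sigmoid(self.gate(h)).squeeze()
        gen = torch.log1p(-p_copy) + self.spell_logprob(h, char_ids)
        if not cache:
            return gen
        keys = torch.cat([k for k, _ in cache])              # (n_cache, d_hid)
        att = F.log_softmax(keys @ self.query(h).squeeze(0), -1)
        hits = [att[i] for i, (_, w) in enumerate(cache) if w == tuple(char_ids)]
        if not hits:
            return gen
        copy = torch.log(p_copy) + torch.logsumexp(torch.stack(hits), 0)
        return torch.logaddexp(gen, copy)   # marginalize over both pathways
```

A complete model would also maintain the cache (inserting each generated word and evicting stale entries) and feed the chosen word back into the word-level LSTM; marginalizing over the generate and copy pathways, as in the last line, is what lets the model learn when reuse is likely.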
- Anthology ID: P17-1137
- Volume: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month: July
- Year: 2017
- Address: Vancouver, Canada
- Editors: Regina Barzilay, Min-Yen Kan
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 1492–1502
- URL: https://aclanthology.org/P17-1137
- DOI: 10.18653/v1/P17-1137
- Cite (ACL): Kazuya Kawakami, Chris Dyer, and Phil Blunsom. 2017. Learning to Create and Reuse Words in Open-Vocabulary Neural Language Modeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1492–1502, Vancouver, Canada. Association for Computational Linguistics.
- Cite (Informal): Learning to Create and Reuse Words in Open-Vocabulary Neural Language Modeling (Kawakami et al., ACL 2017)
- PDF: https://preview.aclanthology.org/ingest-acl-2023-videos/P17-1137.pdf
- Data: WikiText-2