Abstract
This study addresses the problem of identifying the meaning of unknown words or entities in a discourse with respect to the word embedding approaches used in neural language models. We propose a method for on-the-fly construction and exploitation of word embeddings in both the input and output layers of a neural model by tracking contexts. This extends the dynamic entity representation used in Kobayashi et al. (2016) and incorporates a copy mechanism proposed independently by Gu et al. (2016) and Gulcehre et al. (2016). In addition, we construct a new task and dataset called Anonymized Language Modeling for evaluating the ability to capture word meanings while reading. Experiments conducted on our new dataset show that the proposed variant of the RNN language model outperforms the baseline model. Furthermore, the experiments also demonstrate that dynamic updates of the output layer help the model predict reappearing entities, whereas dynamic updates of the input layer are effective for predicting words that follow reappearing entities.
- Anthology ID:
- I17-1048
- Volume:
- Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
- Month:
- November
- Year:
- 2017
- Address:
- Taipei, Taiwan
- Venue:
- IJCNLP
- Publisher:
- Asian Federation of Natural Language Processing
- Pages:
- 473–483
- URL:
- https://aclanthology.org/I17-1048
- Cite (ACL):
- Sosuke Kobayashi, Naoaki Okazaki, and Kentaro Inui. 2017. A Neural Language Model for Dynamically Representing the Meanings of Unknown Words and Entities in a Discourse. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 473–483, Taipei, Taiwan. Asian Federation of Natural Language Processing.
- Cite (Informal):
- A Neural Language Model for Dynamically Representing the Meanings of Unknown Words and Entities in a Discourse (Kobayashi et al., IJCNLP 2017)
- PDF:
- https://preview.aclanthology.org/ingestion-script-update/I17-1048.pdf
- Code:
- soskek/dynamic_neural_text_model
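The abstract above describes building embeddings for unknown words and entities "on the fly" from their observed contexts and using them in both the input and output layers. Below is a minimal illustrative sketch of that idea, not the authors' implementation (the linked repository is the reference code): the class name `DynamicVocab`, the running-average merge rule, and the dot-product scoring are assumptions made for brevity, whereas the paper learns the context encoder and merge function as part of an RNN language model with a copy mechanism.

```python
import numpy as np


class DynamicVocab:
    """Tracks a context-derived vector for each unknown word or entity."""

    def __init__(self, dim, rng=None):
        self.dim = dim
        self.rng = rng or np.random.default_rng(0)
        self.vectors = {}  # token -> current dynamic embedding
        self.counts = {}   # token -> number of contexts seen so far

    def embed(self, token):
        """Input-side use: return the current dynamic vector (random if unseen)."""
        if token not in self.vectors:
            self.vectors[token] = self.rng.normal(scale=0.1, size=self.dim)
            self.counts[token] = 0
        return self.vectors[token]

    def update(self, token, context_vec):
        """Merge a new context vector into the token's dynamic embedding.
        A simple running average stands in for the learned merge in the paper."""
        current = self.embed(token)
        n = self.counts[token]
        self.vectors[token] = (n * current + context_vec) / (n + 1)
        self.counts[token] += 1

    def output_scores(self, hidden):
        """Output-side use: score every tracked token against the hidden state,
        so reappearing entities can be predicted directly."""
        tokens = list(self.vectors)
        if not tokens:
            return tokens, np.array([])
        matrix = np.stack([self.vectors[t] for t in tokens])
        return tokens, matrix @ hidden


if __name__ == "__main__":
    vocab = DynamicVocab(dim=8)
    # Pretend this context vector came from an RNN hidden state around "[ENT1]".
    vocab.update("[ENT1]", np.full(8, 0.5))
    # On a later mention, the input embedding reflects the earlier context.
    print(vocab.embed("[ENT1]")[:3])
    # Output-side scores for predicting a reappearing entity.
    tokens, scores = vocab.output_scores(hidden=np.ones(8))
    print(tokens, scores)
```

The sketch separates the two roles highlighted in the abstract: `embed` corresponds to dynamically updated input-layer embeddings (useful for predicting words that follow a reappearing entity), while `output_scores` corresponds to dynamically updated output-layer embeddings (useful for predicting the reappearing entity itself).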