Abstract
Self-normalizing discriminative models approximate the normalized probability of a class without having to compute the partition function. In the context of language modeling, this property is particularly appealing as it may significantly reduce run-times due to large word vocabularies. In this study, we provide a comprehensive investigation of language modeling self-normalization. First, we theoretically analyze the inherent self-normalization properties of Noise Contrastive Estimation (NCE) language models. Then, we compare them empirically to softmax-based approaches, which are self-normalized using explicit regularization, and suggest a hybrid model with compelling properties. Finally, we uncover a surprising negative correlation between self-normalization and perplexity across the board, as well as some regularity in the observed errors, which may potentially be used for improving self-normalization algorithms in the future.
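To make the idea concrete, the sketch below shows one common way to obtain explicit-regularization self-normalization for a softmax language model: a penalty that pushes the log partition function toward zero, so that at test time the raw score of a word can be read off as an approximate log-probability without summing over the vocabulary. This is only an illustrative sketch, not the paper's exact setup; the class `SelfNormalizedLM`, the function `loss_with_self_norm`, and the weight `alpha` are hypothetical names and hyperparameters.

```python
# Minimal sketch (assumed, not the paper's implementation): a softmax LM
# trained with an explicit self-normalization penalty on (log Z)^2.
import torch
import torch.nn as nn

class SelfNormalizedLM(nn.Module):
    def __init__(self, vocab_size, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.rnn = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)  # unnormalized scores s(w | h)

def loss_with_self_norm(logits, targets, alpha=0.1):
    # Standard cross-entropy term, computed with the exact partition function.
    log_z = torch.logsumexp(logits, dim=-1)  # log Z(h) per position
    log_probs = logits.gather(-1, targets.unsqueeze(-1)).squeeze(-1) - log_z
    nll = -log_probs.mean()
    # Explicit self-normalization regularizer: drive log Z(h) toward 0 so that
    # exp(s(w | h)) itself approximates p(w | h) at inference time.
    return nll + alpha * (log_z ** 2).mean()
```

Under this kind of objective, inference can skip the normalization sum entirely and treat the logit of the predicted word as its log-probability; NCE-trained models, by contrast, acquire a similar property implicitly, which is the comparison the paper investigates.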
- Anthology ID: C18-1065
- Volume: Proceedings of the 27th International Conference on Computational Linguistics
- Month: August
- Year: 2018
- Address: Santa Fe, New Mexico, USA
- Editors: Emily M. Bender, Leon Derczynski, Pierre Isabelle
- Venue: COLING
- Publisher: Association for Computational Linguistics
- Pages: 764–773
- URL: https://aclanthology.org/C18-1065
- Cite (ACL): Jacob Goldberger and Oren Melamud. 2018. Self-Normalization Properties of Language Modeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 764–773, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
- Cite (Informal): Self-Normalization Properties of Language Modeling (Goldberger & Melamud, COLING 2018)
- PDF: https://preview.aclanthology.org/nschneid-patch-2/C18-1065.pdf