Putting Words in Context: LSTM Language Models and Lexical Ambiguity

Laura Aina, Kristina Gulordava, Gemma Boleda


Abstract
In neural network models of language, words are commonly represented using context-invariant representations (word embeddings) which are then put in context in the hidden layers. Since words are often ambiguous, representing the contextually relevant information is not trivial. We investigate how an LSTM language model deals with lexical ambiguity in English, designing a method to probe its hidden representations for lexical and contextual information about words. We find that both types of information are represented to a large extent, but also that there is room for improvement for contextual information.
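The abstract describes probing the LSTM's hidden representations for lexical and contextual information. As a rough illustration of what such probing can look like, below is a minimal PyTorch sketch: a toy LSTM stands in for a pretrained language model, and a linear "diagnostic" probe is trained to reconstruct the input word embedding from the hidden state. The toy model, dimensions, and probe objective are assumptions made here for illustration only; they are not the authors' actual setup (see the amore-upf/LSTM_ambiguity repository linked below for the real code).

```python
# Minimal sketch of probing LSTM hidden states for lexical information.
# Hypothetical toy setup; not the method from Aina et al. (2019).
import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB, EMB, HID, SEQ, BATCH = 100, 32, 64, 12, 16

# Toy LSTM language model components (stand-in for a pretrained model).
embedding = nn.Embedding(VOCAB, EMB)
lstm = nn.LSTM(EMB, HID, batch_first=True)

# Random token sequences standing in for a corpus.
tokens = torch.randint(0, VOCAB, (BATCH, SEQ))

with torch.no_grad():
    embs = embedding(tokens)   # context-invariant word embeddings
    hidden, _ = lstm(embs)     # contextualized hidden states

# Diagnostic probe: a linear map from the hidden state back to the word embedding.
probe = nn.Linear(HID, EMB)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(200):
    optimizer.zero_grad()
    pred = probe(hidden.reshape(-1, HID))
    loss = loss_fn(pred, embs.reshape(-1, EMB))
    loss.backward()
    optimizer.step()

# Low reconstruction error suggests the hidden state still encodes
# the lexical (input-word) information.
print(f"final probe MSE: {loss.item():.4f}")
```

In this kind of setup, the probe's reconstruction quality is read as a measure of how much of a given type of information the hidden layer retains; the paper applies the same idea to both lexical and contextual information.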
Anthology ID: P19-1324
Volume: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2019
Address: Florence, Italy
Editors: Anna Korhonen, David Traum, Lluís Màrquez
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 3342–3348
URL: https://aclanthology.org/P19-1324
DOI: 10.18653/v1/P19-1324
Cite (ACL): Laura Aina, Kristina Gulordava, and Gemma Boleda. 2019. Putting Words in Context: LSTM Language Models and Lexical Ambiguity. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3342–3348, Florence, Italy. Association for Computational Linguistics.
Cite (Informal): Putting Words in Context: LSTM Language Models and Lexical Ambiguity (Aina et al., ACL 2019)
PDF: https://preview.aclanthology.org/nschneid-patch-2/P19-1324.pdf
Code: amore-upf/LSTM_ambiguity